Responding to inventory queries quickly

1. Cache solutions

1.1. Caching the network request

1.1.1. Pros

1.1.1.1. Easy to implement

1.1.1.2. Really fast to respond

1.1.2. Cons

1.1.2.1. The cache must be invalidated when data changes, and data must be prefetched to preserve the benefit of the cache

1.1.3. When to use

1.1.3.1. When you want to implement cache quickly

1.1.3.2. When you can easily identify cacheable queries based on either a URI or headers

1.1.4. When not to use

1.1.4.1. When you cannot differentiate cacheable from non-cacheable queries using the URI or headers
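The cacheability decision above can be sketched as a small key function. This is a minimal illustration, not a prescribed implementation; the path allow-list and the rule about `Authorization` headers are assumptions for the example.

```python
from typing import Optional
from urllib.parse import urlsplit

# Hypothetical: paths we know serve stable, user-independent data.
CACHEABLE_PATHS = {"/products", "/availability"}

def cache_key(uri: str, headers: dict) -> Optional[str]:
    """Return a cache key for the request, or None if it must not be cached."""
    parts = urlsplit(uri)
    if parts.path not in CACHEABLE_PATHS:
        return None
    # Responses that vary per user (e.g. carry auth) are not safely cacheable.
    if "authorization" in {k.lower() for k in headers}:
        return None
    # Key on path + sorted query string so parameter order doesn't fragment the cache.
    query = "&".join(sorted(parts.query.split("&"))) if parts.query else ""
    return f"{parts.path}?{query}"
```

When no such rule can be written, the network-request cache has no safe key, which is exactly the "when not to use" case.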

1.1.5. Implementation complexities

1.1.5.1. Depending on the remote system, you may need to invalidate the cache every time a customer purchases from that service

1.1.5.1.1. Example: A system serves a list of products, each with an offer code.

1.1.5.1.2. This offer code is then used to purchase

1.1.5.1.3. The next customer who searches for the product is served the cached response containing the same offer code, which has already been used

1.1.5.1.4. When that second customer tries to buy or view more details about the product, the purchase fails

1.1.5.1.5. This can happen when the offer codes are single-use

1.1.5.1.6. We can avoid this by fetching a new offer code in the background between the time the customer views the product and the time they commit to a purchase.
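The refresh-on-view idea can be sketched as below. The cache shape, product fields, and `fetch_offer_code` stand-in are all hypothetical; in a real service the refresh would be dispatched to a background worker rather than run inline.

```python
import secrets

# Hypothetical in-memory cache of product listings; each entry carries a
# single-use offer code obtained from the remote system.
cache = {}

def fetch_offer_code(product_id: str) -> str:
    """Stand-in for the remote call that issues a fresh single-use offer code."""
    return secrets.token_hex(4)

def view_product(product_id: str) -> dict:
    """Serve the cached listing, then refresh its offer code so the next
    buyer is handed an unspent one."""
    entry = cache.setdefault(
        product_id,
        {"name": f"Product {product_id}", "offer_code": fetch_offer_code(product_id)},
    )
    shown = dict(entry)
    # Refresh in place so the next viewer gets a code that has not been spent.
    entry["offer_code"] = fetch_offer_code(product_id)
    return shown
```

Each view hands out the currently cached code and immediately replaces it, so two consecutive viewers never share a single-use code.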

1.1.5.2. Depending on the number of input variables, you may need to prefetch a large number of queries

1.1.5.2.1. Example: hotel searches for every main location, every stay duration from 1 to 5 days, every check-in date in the next 3 months, for 2 adults in one room, for each room type

1.1.5.2.2. This results in 30 (days per month) * 3 (months) * 5 (durations) * 5 (room types) * 10 (locations) = 22,500 different search queries that need to be refreshed daily.

1.1.5.2.3. If we add more options, the number of queries we need to cache grows multiplicatively, easily into the millions for a single service
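The combinatorial growth in the hotel example can be made concrete by enumerating the query space; the factor values below are the ones listed above, and they multiply out to 22,500.

```python
from itertools import product

# The factors from the hotel example; each extra dimension multiplies the total.
check_in_days = range(90)                                   # next 3 months ~ 30 days x 3
durations     = range(1, 6)                                 # 1-5 day stays
room_types    = ["std", "dbl", "twin", "suite", "family"]   # 5 room types
locations     = [f"loc{i}" for i in range(10)]              # 10 main locations

queries = list(product(check_in_days, durations, room_types, locations))
print(len(queries))  # 90 * 5 * 5 * 10 = 22500
```

Adding a single new 4-way option (say, number of adults) would push this to 90,000 queries, which is why prefetching scales poorly with input dimensions.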

1.1.6. Solution

1.1.6.1. Nuster / External API response caching server

1.2. Storing the data in a database

1.2.1. Pros

1.2.1.1. We control all the data, making it easier to optimize database queries for a fast response result

1.2.1.2. We are able to display availability and book available products even when a remote system goes down.

1.2.2. Cons

1.2.2.1. This is only possible when we can identify remote items explicitly, which is much harder when the remote system exposes no stable identifiers (as with Carnect)

1.2.2.2. This only works when we know how to calculate the prices and can retrieve the available inventory for each product.

1.2.2.3. This is time-consuming to set up and maintain

1.2.2.4. If the remote service changes which fields it uses to calculate prices, you must detect that yourself and update accordingly; they will not notify you up front.

1.2.3. When to use

1.2.3.1. When we can identify remote products, see the available inventory and know how the pricing scheme works, then we should use this

1.2.4. When not to use

1.2.4.1. When we are trying to move fast

1.2.4.2. When we don't have any way to identify remote products

1.2.4.3. When we don't know the formula to calculate the prices for the remote product

1.2.5. Implementation complexities

1.2.5.1. We need to maintain a database

1.2.5.2. When the remote calculations change, or when different fields are used in them, we need to update our importer

1.2.5.3. When the remote service creates new formulas, we need to modify both our database and our codebase to account for that formula

1.3. Caching partial data

1.3.1. Pros

1.3.1.1. We can store simple information that doesn't change often, so the client sees something other than an empty loading page

1.3.2. Cons

1.3.2.1. Since we only cache partial data, we may still need to fetch the rest, so this doesn't always save time overall

1.3.3. When to use

1.3.3.1. When we only need partial data for a product and we want to display information quickly to the user while we wait for full data to arrive from a remote system

1.3.3.1.1. An example of this might be to show the right products in the search results while we're waiting for the latest prices for the product to display and sort for the client
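The search-results example above can be sketched as a two-phase response: the cached partial record is yielded immediately, and the full record follows once the slow pricing call returns. The cache contents, product fields, and `fetch_price` delay are assumptions for illustration.

```python
import asyncio

# Hypothetical partial cache: cheap, rarely-changing fields only.
partial_cache = {"p1": {"name": "Seaside Hotel", "thumbnail": "/img/p1.jpg"}}

async def fetch_price(product_id: str) -> float:
    await asyncio.sleep(0.05)  # stands in for a slow remote pricing call
    return 129.0

async def search(product_id: str):
    """Yield the cached partial record first, then the complete record."""
    yield {"id": product_id, **partial_cache[product_id], "price": None}
    price = await fetch_price(product_id)
    yield {"id": product_id, **partial_cache[product_id], "price": price}

async def main():
    return [r async for r in search("p1")]

results = asyncio.run(main())
# results[0] arrives immediately without a price; results[1] carries the price.
```

The client can render names and thumbnails from the first chunk while prices stream in, which is exactly the "show something while waiting" benefit, without pretending the cached data is complete.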

1.3.4. When not to use

1.3.4.1. When the full data arrives quickly anyway, or when another caching solution is a better fit

1.3.5. Implementation complexities

1.3.5.1. We need to separate the data - this is best done in an intermediate system such as the ClientAPI system

1.3.5.2. We need to be careful when we cache to either store the data only for a short period of time, or implement a cache invalidation strategy that enables other services to invalidate data when it changes
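Both options above (short lifetimes and explicit invalidation) fit in one small structure. A minimal sketch, assuming an in-process store; a shared cache such as Redis would serve the same role across services.

```python
import time

class TTLCache:
    """Entries expire after a short TTL, and other services can also
    invalidate a key explicitly when the underlying data changes."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy expiry on read
            return None
        return value

    def invalidate(self, key):
        """Called by whichever service learns the data changed."""
        self._store.pop(key, None)
```

The TTL bounds how stale a forgotten entry can get, while `invalidate` lets changes propagate faster than the TTL when a service knows about them.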

2. Issue Statement

2.1. Problems

2.1.1. It can take a long time for customers to see results and be able to book

2.1.2. Sometimes it takes up to 60 seconds for results to display and for the visitor to be able to compare products

2.1.3. On product pages, you sometimes have to wait for a while until you are able to choose extras

2.1.4. On package product pages, we are unable to search in real-time and present bookable products based on the input from the customer

2.2. Solutions

2.2.1. Create multiple layers of temporary storage and/or cache, from individual provider responses to intermediary caches

2.2.2. Separate requests into multiple parallel requests

2.2.3. Add GraphQL subscriptions to return data from a single request

2.2.3.1. GraphQL Subscriptions

2.2.3.2. This allows the client to only ask once and the server takes care of the rest, delivering results as they come in from various sources
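The ask-once, deliver-as-ready shape of 2.2.2 and 2.2.3 can be sketched without tying it to a particular GraphQL library: the server fans out to providers in parallel and pushes each result as it completes, which is the shape a subscription resolver would take. Provider names and delays are invented for the example.

```python
import asyncio

async def provider(name: str, delay: float):
    """Stands in for one remote inventory source."""
    await asyncio.sleep(delay)
    return {"provider": name, "results": [f"{name}-room-1"]}

async def search_subscription(query: str):
    """The client asks once; results are pushed as each source answers."""
    tasks = [
        asyncio.create_task(provider("fast", 0.01)),
        asyncio.create_task(provider("slow", 0.05)),
    ]
    # as_completed yields each provider in the order it finishes,
    # so fast sources are never held back by slow ones.
    for done in asyncio.as_completed(tasks):
        yield await done

async def main():
    return [chunk async for chunk in search_subscription("hotels in Oslo")]

chunks = asyncio.run(main())
```

The client renders the fast provider's rooms immediately instead of waiting for the slowest source, which is the latency win the single-request design is after.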

2.3. Complexities added with the solution

2.3.1. With each cache added, we have to add cache invalidation

3. Basic requirements before caching

3.1. Make sure that we are using Jaeger traces for all aspects of the request cycle

3.1.1. We need to map out each network request as it happens.

3.1.2. We need to map out expensive internal functions

3.2. Make sure Jaeger trace context is being picked up and forwarded

3.2.1. When you have an incoming network request, you need to extract and use the trace identifiers

3.2.2. When you are creating an outgoing network request, you need to add the current trace context to the outgoing network request

3.2.2.1. This only applies to internal services. We should not send headers directly to outside services.
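In practice a tracing client (an OpenTracing/OpenTelemetry `inject`/`extract` pair) handles propagation, but the forwarding rule above reduces to a small function. Jaeger carries its context in the `uber-trace-id` HTTP header; the internal-host allow-list below is a hypothetical stand-in for however internal services are recognized.

```python
from urllib.parse import urlsplit

# Jaeger propagates its trace context in this HTTP header.
TRACE_HEADER = "uber-trace-id"

# Hypothetical allow-list of hosts we operate ourselves.
INTERNAL_HOSTS = {"clientapi.internal", "inventory.internal"}

def outgoing_headers(incoming_headers: dict, target_url: str) -> dict:
    """Forward the trace context to internal services only; never leak
    tracing headers to third-party systems."""
    headers = {}
    host = urlsplit(target_url).hostname
    trace_id = incoming_headers.get(TRACE_HEADER)
    if trace_id and host in INTERNAL_HOSTS:
        headers[TRACE_HEADER] = trace_id
    return headers
```

Any outgoing call to an external provider simply omits the header, so internal traces stay joined while nothing about our request flow is exposed outside.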

3.3. Identify what parts are slow to respond and might benefit from caching

3.3.1. Try to optimize code and queries before adding any caching layer; a cache needs ongoing maintenance