How to deliver data quickly
Batching
Pros and cons of batching
How to handle batch requests
- Batching: group messages into a single request
- Pros:
    - Increased throughput: fewer requests, less overhead, fewer connections
    - Decreased cost (especially in the cloud, where services often bill per request)
- Cons:
    - Complexity in both the sender and the receiver
        - Sender: takes time to buffer messages and send them out based on time or size; can be hard to implement and configure
        - Receiver: processes messages one by one, but what if one message fails? Roll back all of them and have the sender resend the whole batch to reprocess? If not, how does the sender know which failed messages to resend?
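The sender-side buffering described above can be sketched as a small class that flushes when the batch is full or too old. This is a minimal illustration, not a production client; `transport` is any hypothetical callable that ships one batch.

```python
import time

class BatchingSender:
    """Buffers messages and flushes when the batch reaches max_batch_size
    or the oldest buffered message exceeds max_delay_seconds."""

    def __init__(self, transport, max_batch_size=10, max_delay_seconds=0.5):
        self.transport = transport
        self.max_batch_size = max_batch_size
        self.max_delay_seconds = max_delay_seconds
        self.buffer = []
        self.first_buffered_at = None

    def send(self, message):
        if not self.buffer:
            self.first_buffered_at = time.monotonic()
        self.buffer.append(message)
        if len(self.buffer) >= self.max_batch_size:
            self.flush()

    def poll(self):
        # Call periodically (e.g. from a timer) to enforce the time limit.
        if self.buffer and time.monotonic() - self.first_buffered_at >= self.max_delay_seconds:
            self.flush()

    def flush(self):
        if self.buffer:
            self.transport(self.buffer)
            self.buffer = []
            self.first_buffered_at = None

# Usage: collect batches in a list instead of making a network call.
batches = []
sender = BatchingSender(batches.append, max_batch_size=3)
for i in range(7):
    sender.send(i)
sender.flush()  # drain the remainder
# batches is now [[0, 1, 2], [3, 4, 5], [6]]
```

Even this toy version shows why the configuration is tricky: batch size and delay trade latency against throughput.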
- How does the server handle batch requests?
    - Treat the entire request as a single atomic unit -> the request succeeds only when all nested operations complete successfully
    - Treat each nested operation independently and report back a failure for each individual operation (common in practice)
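The common per-operation approach can be sketched as follows; `apply_one` is a hypothetical single-operation handler that raises on failure, and the response reports successes and failures separately instead of failing the whole batch.

```python
def handle_batch(operations, apply_one):
    """Apply each nested operation independently and report
    per-operation results (the common approach in practice)."""
    successful, failed = [], []
    for op in operations:
        try:
            apply_one(op)
            successful.append(op["id"])
        except Exception as exc:
            failed.append({"id": op["id"], "error": str(exc)})
    return {"successful": successful, "failed": failed}

# Usage: operation 2 fails, the others still succeed.
def apply_one(op):
    if op["value"] < 0:
        raise ValueError("negative value")

result = handle_batch(
    [{"id": 1, "value": 5}, {"id": 2, "value": -1}, {"id": 3, "value": 7}],
    apply_one,
)
# result == {"successful": [1, 3], "failed": [{"id": 2, "error": "negative value"}]}
```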
- Batch request format
    - Set of N requests batched together -> individual requests combined, each with its own header and body
    - List of N resources batched together -> messages combined; the response returns a list of successful and failed items
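As an illustration of the two formats, here are hypothetical JSON shapes (field names are made up for the example, not from any specific API):

```python
import json

# Format 1: N full requests batched together, each with its own header and body.
batch_of_requests = {
    "requests": [
        {"headers": {"Content-Type": "application/json"}, "method": "PUT",
         "path": "/items/1", "body": {"name": "a"}},
        {"headers": {"Content-Type": "application/json"}, "method": "PUT",
         "path": "/items/2", "body": {"name": "b"}},
    ]
}

# Format 2: N resources combined into one request; the response reports
# which items succeeded and which failed.
batch_of_resources = {"items": [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]}
response = {"successful": [1], "failed": [{"id": 2, "error": "conflict"}]}

print(json.dumps(batch_of_resources))
```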
- Handling failed requests
    - 1. Retry the entire batch: works well if each request is idempotent; no harm to the successful requests, and the failed ones get another chance
    - 2. Retry each failed request individually
    - 3. Send another batch request containing only the failed operations
    - Options 2 and 3 require additional effort from the client but don't require idempotency
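Strategy 3 can be sketched as a client loop that resends only the failed operations; `send_batch` is a hypothetical transport returning per-item results.

```python
def send_batch_with_retries(items, send_batch, max_attempts=3):
    """Send a batch, then retry only the failed items until none
    remain or attempts run out. Returns the items that still failed."""
    pending = list(items)
    for _ in range(max_attempts):
        if not pending:
            break
        result = send_batch(pending)
        failed_ids = set(result["failed"])
        pending = [item for item in pending if item["id"] in failed_ids]
    return pending

# Usage with a fake transport that fails item 2 on the first call only.
calls = []
def flaky_send(batch):
    calls.append([i["id"] for i in batch])
    failed = [2] if len(calls) == 1 else []
    return {"successful": [i["id"] for i in batch if i["id"] not in failed],
            "failed": failed}

left = send_batch_with_retries([{"id": 1}, {"id": 2}], flaky_send)
# calls == [[1, 2], [2]] and left == []
```

Note that no idempotency is needed: item 1 is never resent after it succeeds.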
- SQS batch API
    - The consumer pulls up to 10 messages
    - A deletion ack is returned to SQS for each processed message
    - The consumer needs to check the response: successful messages are marked processed; failed ones are not
    - When the visibility timeout expires, the unacked messages become visible again and will be processed by this or another consumer
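The SQS flow above can be sketched with boto3-style call shapes (`receive_message`, `delete_message_batch`, and a response with `Successful`/`Failed` lists). A `FakeSQS` stub stands in for `boto3.client("sqs")` so the example is self-contained; the queue URL is hypothetical.

```python
class FakeSQS:
    """Stub mimicking the boto3 SQS response shapes used below."""

    def receive_message(self, QueueUrl, MaxNumberOfMessages):
        return {"Messages": [
            {"MessageId": str(i), "ReceiptHandle": f"rh-{i}", "Body": f"msg {i}"}
            for i in range(MaxNumberOfMessages)
        ]}

    def delete_message_batch(self, QueueUrl, Entries):
        # Pretend the entry with Id "3" could not be deleted.
        ok = [e for e in Entries if e["Id"] != "3"]
        bad = [e for e in Entries if e["Id"] == "3"]
        return {"Successful": [{"Id": e["Id"]} for e in ok],
                "Failed": [{"Id": e["Id"], "SenderFault": False} for e in bad]}

sqs = FakeSQS()
queue_url = "https://sqs.example/queue"  # hypothetical

# 1. The consumer pulls up to 10 messages.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)["Messages"]

# 2. Process them, then send deletion acks in one batch.
entries = [{"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]} for m in messages]
response = sqs.delete_message_batch(QueueUrl=queue_url, Entries=entries)

# 3. Check the response: failed deletions stay in the queue and are
#    redelivered once their visibility timeout expires.
failed_ids = [f["Id"] for f in response.get("Failed", [])]
# failed_ids == ["3"]
```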
- Kafka relies heavily on batching
Compression
Pros and cons
Compression algo and the trade-offs
- Fewer bits to transfer after compression
- Pros:
    - Less network bandwidth used when transmitting messages, since there is less data to transfer
    - Less data to store -> effectively increases storage capacity
    - Decreased cost (some services bill based on data volume)
- Applications:
    - Servers compress HTTP data for faster transfer (the browser decompresses it)
    - Databases: RocksDB compresses SSTables
    - Messaging systems
- The bigger the data, the more effective the compression
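This effect is easy to demonstrate with Python's standard-library `zlib` (DEFLATE, the same algorithm behind gzip): the same kind of data compresses better as the payload grows, because the compressor finds more repetition to exploit; a tiny payload can even get bigger.

```python
import zlib

line = b"2024-01-01 INFO request handled in 12ms\n"
ratios = {}
for n in (1, 10, 1000):
    payload = line * n
    compressed = zlib.compress(payload)
    ratios[n] = len(payload) / len(compressed)
    print(n, len(payload), len(compressed), round(ratios[n], 1))
```

The compression ratio climbs steeply between 1, 10, and 1000 repeated lines.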
- Compression and decompression consume computational resources, but that's usually an acceptable trade-off
- Types:
    - Lossless: data is fully recoverable after decompression; used for HTTP requests and responses
    - Lossy: permanently discards some data for better compaction; used for multimedia data such as audio and video (streaming)
- Compression algorithm trade-offs
    - Compression speed: important for write-heavy applications
    - Decompression speed: important for read-heavy applications
    - Compression ratio: important for storing data on disk
- Algorithms

| Algorithm | Compression speed | Decompression speed | Compression ratio |
| --- | --- | --- | --- |
| Deflate (gzip): the standard for HTTP compression | B | B | A |
| Snappy: created by Google, used extensively in Google projects like Bigtable and MapReduce; many NoSQL databases support it | A- | A | B |
| Zstandard: created by Facebook, widely used in file systems and databases | A- | A- | A+ |
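The speed-vs-ratio trade-off can be felt within a single algorithm: DEFLATE's level knob (1 = fastest, 9 = best ratio) mirrors the trade-off between the algorithms in the table. A small sketch using the standard-library `zlib`:

```python
import time
import zlib

payload = b"user=alice action=login status=ok\n" * 20000

sizes = {}
for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    sizes[level] = len(compressed)
    print(f"level {level}: {len(compressed)} bytes in {elapsed * 1000:.1f} ms")
```

Higher levels spend more CPU time to shave off bytes, which is why write-heavy systems often prefer fast compressors like Snappy while storage-bound systems lean toward high-ratio ones like Zstandard.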