8 posts tagged with "aws"
Bootstrapping Trust on Kubernetes

· 10 min read
Hunter Fernandes
Software Engineer

We use Kubernetes at $work, and since I am in charge of platform, Kubernetes is my problem. Here's an interesting problem when trying to secure your Kubernetes workload.

Our pods need to talk to each other over the network. Early on, we decided that each pod would receive a unique identity in our application's authentication layer. This provides us maximum auditability -- we can tell exactly which pod performed any given network call. This extends up one level to the service layer as well. With this system, when a pod receives a network call it can tell:

  1. What service it is talking to, and
  2. Which pod it is talking to.

It can then allow or deny specific actions based on identity.

This is a great idea! So how do we implement it? The wrinkle is that when programs start, they have nothing: they know who they are, but how do they prove to other pods who they are?

Proving Identity

For ease of reading, I will name the two pods in this example Alice and Bob. Alice is the pod that wants to prove its identity to Bob.

In general, the way for Alice to prove that it is, in fact, Alice is to present something that only Alice could have. On Kubernetes, by default, a pod is granted a Kubernetes service account (SA) token. This token allows it to communicate with the Kubernetes API server.

So our first option is for Alice to send the SA token to Bob. Bob can inspect and check the SA token against the Kubernetes API server. If the token is valid, Bob knows the caller is Alice.

This is bad because now Bob has Alice's SA token. If Bob were a bad actor (or compromised), then

  • Bob can use the SA token to issue Kubernetes API calls as Alice. Whatever Alice can do, Bob can do too under this scheme!
  • Bob can submit the SA token to other services, which would then think Bob is Alice and allow Bob to act as Alice.

Either case is not acceptable to us. So, we need a way for Alice to prove its identity without giving away the secret to the counterparty.

Early Attempts

For the longest time, we compromised on this point by having a central authentication service (Bob in this example) that had access to read Kubernetes service account tokens.1 Alice would send a hashed version of the SA token to Bob, and Bob would look through the Kubernetes service account secrets and validate that the hash matched what Kubernetes had on record for Alice.

This did not actually solve the problem: now the hash was the valuable MacGuffin instead of the SA token. But at least it did reduce the value of the token being exchanged: if there were a MITM attack between Alice and Bob, the attacker would only get the hash, not the actual SA token. But now Bob needs access to read ALL tokens! Terrible.

A better method is to have a chain of trust. But what is the root of the chain? We already have something that is the root of all trust: the Kubernetes API server.

Unfortunately, the Kubernetes API server did not have a method of issuing tokens that could be used to prove identity safely... until recently.

Token Projection & Review API

Kubernetes 1.20 GA'd Bound Service Account Tokens, implemented through Token Projection and the Token Review API. This allows a pod to request a token that the Kubernetes API server will inject into the pod as a file.

The most important part of this KEP (for our purposes) is the token can be arbitrarily scoped. This means that Alice can request a token that is scoped to only allow it to talk to Bob. Therefore, if Bob were compromised, the attacker would not be able to impersonate Alice to Charlie.

The Token Review API is the counterpart to Token Projection. It allows a pod to submit a token and a scope to the Kubernetes API server for validation. The API server is responsible for checking that the token is trusted and the scopes on the token match the submitted scopes.

This simplifies our wacky hashing scheme and god-mode service and turns it into a simple exchange:

  1. Alice reads the file mounted in the pod.
  2. Alice sends the token to Bob.
  3. Bob submits the token to the Kubernetes API server for validation with the bob scope.
  4. The Kubernetes API server validates the token and the scopes.

Just some file reading and some HTTP requests!

Concrete Example

Let's walk through a concrete example of this in action.

Alice is very simple:

---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: alice
  namespace: default
---
kind: Pod
apiVersion: v1
metadata:
  name: alice-pod
  namespace: default
spec:
  serviceAccountName: alice
  containers:
    - name: alice
      image: alpine/k8s
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 30; done;"]
      volumeMounts:
        - name: alice-token
          mountPath: /var/run/secrets/hfernandes.com/mytoken
          readOnly: true
  volumes:
    - name: alice-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 7200
              audience: bob
---

Bob is a little more complicated. We must give it permission to talk to the Kubernetes Token Review API. Since Token Review is not namespaced, we give it a ClusterRole instead of a Role.

---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: bob
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bob
rules:
  - apiGroups: ["authentication.k8s.io"]
    resources: ["tokenreviews"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bob
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bob
subjects:
  - kind: ServiceAccount
    name: bob
    namespace: default
---
# Bob pod
kind: Pod
apiVersion: v1
metadata:
  name: bob-pod
  namespace: default
spec:
  serviceAccountName: bob
  containers:
    - name: bob
      image: alpine/k8s:1.25.15 # Already has kubectl installed
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 30; done;"]
---

Now, if we look in alice-pod we find our token:

$ kubectl exec -it alice-pod -- sh
/ # ls /var/run/secrets/hfernandes.com/mytoken/
token
/ # cat /var/run/secrets/hfernandes.com/mytoken/token
XXXXXXXsecretXXXXXX.YYYYYYYYsecretYYYYYYYYY.ZZZZZZZZZZZZsecretZZZZZZZZZZZZZZ

Let's go into bob-pod and submit this token to the Kubernetes API server for validation:

$ kubectl exec -it bob-pod -- sh
/apps # cat tokenrequest.json
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "spec": {
    "audiences": ["bob"],
    "token": "XXXX"
  }
}
/apps # kubectl create --raw '/apis/authentication.k8s.io/v1/tokenreviews?pretty=true' -f tokenrequest.json
{
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1",
  "spec": {
    "token": "XXXXXXXsecretXXXXXX.YYYYYYYYsecretYYYYYYYYY.ZZZZZZZZZZZZsecretZZZZZZZZZZZZZZ",
    "audiences": [
      "bob"
    ]
  },
  "status": {
    "authenticated": true,
    "user": {
      "username": "system:serviceaccount:default:alice",
      "uid": "ffc54e8f-c23b-4a5c-920b-fc729796295d",
      "groups": [
        "system:serviceaccounts",
        "system:serviceaccounts:default",
        "system:authenticated"
      ],
      "extra": {
        "authentication.kubernetes.io/pod-name": [
          "alice-pod"
        ],
        "authentication.kubernetes.io/pod-uid": [
          "75302c6d-90d6-4299-88e3-3ded4393471a"
        ]
      }
    },
    "audiences": [
      "bob"
    ]
  }
}

What happens if we try to submit the token to the Kubernetes API server with the wrong scope (audience)?

/apps # jq '.spec.audiences[0] = "anotherscope" | .' tokenrequest.json > wrongscope.json
/apps # kubectl create --raw '/apis/authentication.k8s.io/v1/tokenreviews?pretty=true' -f wrongscope.json
{
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1",
  "spec": {
    "token": "XXXXXXXsecretXXXXXX.YYYYYYYYsecretYYYYYYYYY.ZZZZZZZZZZZZsecretZZZZZZZZZZZZZZ",
    "audiences": [
      "anotherscope"
    ]
  },
  "status": {
    "user": {},
    "error": "[invalid bearer token, token audiences [\"bob\"] is invalid for the target audiences [\"anotherscope\"], unknown]"
  }
}

You can see that the API server rejected the token because the scope was wrong.
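
If you want this check in application code rather than via kubectl, here's a minimal sketch of Bob's side using the official Python kubernetes client (the audience and the claims are the ones from the example above; this isn't our production code):

from kubernetes import client, config

def verify_caller_token(token: str) -> str:
    """Submit the caller's projected token to the Token Review API and
    return the authenticated username, e.g. system:serviceaccount:default:alice."""
    # Bob runs in-cluster, so use its own mounted service account credentials.
    config.load_incluster_config()

    review = client.V1TokenReview(
        api_version="authentication.k8s.io/v1",
        kind="TokenReview",
        spec=client.V1TokenReviewSpec(token=token, audiences=["bob"]),
    )
    result = client.AuthenticationV1Api().create_token_review(review)

    status = result.status
    if not status.authenticated or "bob" not in (status.audiences or []):
        raise PermissionError(f"token rejected: {status.error}")
    return status.user.username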

What's in the JWT?

If we decode this JWT, we find:

{
  "aud": [
    "bob"
  ],
  "exp": 1700115257,
  "iat": 1700108057,
  "iss": "https://oidc.eks.us-west-2.amazonaws.com/id/XXXXXXXXXXXXXXXXXXX",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {
      "name": "alice-pod",
      "uid": "75302c6d-90d6-4299-88e3-3ded4393471a"
    },
    "serviceaccount": {
      "name": "alice",
      "uid": "ffc54e8f-c23b-4a5c-920b-fc729796295d"
    }
  },
  "nbf": 1700108057,
  "sub": "system:serviceaccount:default:alice"
}

Note that the JWT is explicit about the serviceaccount being alice, whereas the Token Review API requires us to parse that out of .status.user.username ("system:serviceaccount:default:alice"). That is kind of annoying. But both clearly contain the pod name.

How fast is the Token Review API?

Let's check our token against the API Server 10 times and see how long it takes -- can we put this API on the hot path?

/apps # for i in $(seq 10); do kubectl create --raw '/apis/authentication.k8s.io/v1/tokenreviews?pretty=true' -f tokenrequest.json -v10 2>&1 | grep ServerProcessing; done
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3 ms ServerProcessing 5 ms Duration 9 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 4 ms ServerProcessing 2 ms Duration 7 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 6 ms ServerProcessing 1 ms Duration 8 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 4 ms ServerProcessing 4 ms Duration 9 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3 ms ServerProcessing 1 ms Duration 5 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3 ms ServerProcessing 1 ms Duration 6 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3 ms ServerProcessing 1 ms Duration 5 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3 ms ServerProcessing 1 ms Duration 5 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3 ms ServerProcessing 1 ms Duration 5 ms
HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 4 ms ServerProcessing 5 ms Duration 10 ms

So typically 1-5ms if you keep a connection to the API server open (and don't have to do the TLS handshake). Not bad!

In our use case, we are only using this to bridge to our main authentication system, so we don't need to do this on the hot path. But it's fast enough we could!

Upgrading Our Interservice Authentication

When this API went GA, I jumped on the opportunity to upgrade our interservice authentication.

Implementing this in our application gave us great benefits. We were able to

  1. greatly simplify our code and logical flow,
  2. ditch our wacky hashing scheme,
  3. remove permissions from the god service that could read all service account token secrets,
  4. and increase security by scoping tokens, which removed an entire class of potential attacks.

In addition to moving to Projected Service Account Tokens, we also added checking that the caller IP address matches the IP address of the pod that the token belongs to. This has the added benefit of preventing outside callers from attempting internal authentication at all.
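
Here's a rough sketch of what that IP check can look like, building on the Token Review result above. It assumes Bob is also allowed to read pods (which the RBAC in the example does not grant), and the helper names are illustrative:

from kubernetes import client, config

def caller_ip_matches_pod(review_status, caller_ip: str) -> bool:
    """Compare the caller's source IP to the IP of the pod named in the token."""
    config.load_incluster_config()

    extra = review_status.user.extra or {}
    pod_names = extra.get("authentication.kubernetes.io/pod-name", [])
    if not pod_names:
        return False

    # The namespace is encoded in the username: system:serviceaccount:<ns>:<sa>
    namespace = review_status.user.username.split(":")[2]
    pod = client.CoreV1Api().read_namespaced_pod(pod_names[0], namespace)
    return pod.status.pod_ip == caller_ip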

ECS / Fargate Equivalent?

Kubernetes is notorious for having a fast, grueling upgrade cycle. So I am always keeping my ear to the ground to see how the alternatives are doing.

Two container orchestrators I have high hopes for are AWS Elastic Container Service and Fargate. The issue is that I have not found a way to implement fine-grained per-container identity proofs in these systems. If you know of a way, please let me know!

Footnotes

  1. Due to our application requirements, we had a central authentication service anyway, so this was not a huge deal. It was already under stricter security controls than the rest of the applications due to the sensitive nature of its data (e.g., password hashes), so we felt comfortable enriching its cluster permissions.

AWS Support Pricing

· 4 min read
Hunter Fernandes
Software Engineer

If you are a tech company on the AWS cloud, buying AWS Support is one of the best things you can do for your business. It immediately gives you access to incredibly skilled professionals to fill gaps in your knowledge. While it's obvious you can reach out when you are having problems, you can reach out for guidance, too. Support has saved me weeks of work with a few deep insights into technical issues they foresaw me running into down the line. RDS? Migrations? VPNs? Exotic VPC layouts? If you are doing anything remotely interesting, then they can help and advise.

You still have options for the rare case where frontline support can't help. They are not afraid of escalating: when things aren't making sense, they engage the same engineering team that works on the service. What a great thing to have -- you can hear exactly from the service developers! I've had wacky permissions policies explained to me, gotten workarounds for internal implementation issues, and received timelines for when a feature I wanted would ship -- sometimes you can just wait it out instead of building it yourself. You can even put in feature requests. I've seen my requests turn into real features!

From a business perspective, it's also an insurance policy against things going wrong and gives you options when they do. On the Business plan, you can get a response in at most one hour. If your product depends on AWS to work, this is a no-brainer. If we're having an outage, I'd rather tell my customer that I am on the phone with AWS engineers than that I didn't opt into a better support plan.

Support = $$$

The issue is that support is expensive. Each AWS account you enroll in support costs MAX($100, 10% of spend).1 Did you get that? Per account! That's insane in a world where AWS has declared that the best practice is a multi-account setup. To enroll your entire multi-account organization in support, you must pay (drumroll please...) at least $5,000 every month. Just on Support.

AWS is being hypocritical: they preach the virtues of multi-account on the one hand and then make it financially painful to do so on the other.

Best Practices: Multi-Account Setup

  Account                    Support Cost
  Master Account             + $100
  Identity Account           + $100
  Log Archive Account        + $100
  Audit Account              + $100
  Security Account           + $100
  Shared Services Account    + $100
  Total                      🤑💸
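
To put numbers on it, here's a tiny sketch of the per-account formula (using the simplified MAX($100, 10% of spend) from above; the real percentage tiers down with usage, per the footnote):

def support_cost(monthly_spend: float) -> float:
    """Business support for one enrolled account: MAX($100, ~10% of spend)."""
    return max(100.0, 0.10 * monthly_spend)

# Mostly-empty "best practice" accounts still cost $100 each...
print(support_cost(0))        # 100.0
# ...and a workload account pays the percentage.
print(support_cost(50_000))   # 5000.0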

They need to add an organization-wide plan that doesn't cost an arm and a leg. The Business plan needs to be made multi-account.

Weird Hack

In an effort to show how dumb AWS Support pricing is, I will show how to get around it. Ultimately, the $200 of fixed cost washes away and the 10% additional cost becomes the main factor. But you can get around that!

  1. Create an AWS organization and create a single subaccount.
  2. Run all of your workloads in your sub-account.
  3. Buy all of your RIs in your master account.
  4. Enroll only your sub-account in a support plan.

Now you have a situation where the 10% bill does not apply to your RI purchases, but you will receive support for those resources in the subaccount. This is an artifact of AWS Support's billing model. It's a stupid model and they should change it. If they had affordable (not $5,000+/mo!) options, it would be a braindead-simple choice to just enroll the entire organization. Then this dumb hack would not work.

I hope that by showing how to avoid dumb per-account support billing, I can nudge AWS toward adding better org-wide options. It's the only way they should be billing support.

Footnotes

  1. The percentage addition to your bill steps down as your usage grows. It's 10% when you start and slowly goes down to 3% at $250,000/month.

SQS Performance (II)

· 5 min read
Hunter Fernandes
Software Engineer

This is a follow-up to my previous blog post, SQS Slow Tail Performance. That post has much more detail about the problem. For various work-related reasons, I am finally revisiting SQS performance. It's my white whale, and I think about it frequently. And I've discovered at least one thing!

HTTP Keep-Alive

In my previous post, I mentioned that both HTTP "cold-start" times and low HTTP Connection: keep-alive times could be contributing factors. And, after some more investigating, it turns out I was sort of right. Sort of.

HTTP Keep-Alive is when the server indicates to the client that the TCP connection should be kept open after the HTTP Response is sent back. Normally (under HTTP/1), the TCP connection would be closed after the response is completed. Keeping it alive lowers the TCP/TLS overhead associated with making a request. When using keep-alive, you only pay the overhead for the first request and all subsequent requests get to piggyback on the connection.

Persistent connections are so performance-enhancing that the HTTP/1.1 specification makes them the default (§8.1.2.1)! The method by which the server indicates to the client that it wants to close the connection is by sending Connection: close. But the server can just close the connection anyways without warning.

Every 80 Messages

I ran some more tests like last time. Importantly, the difference from the testing I did in my previous post was that I executed more sqs:SendMessage calls serially. Last time I did 20. This time I did 180. It turns out that the magic number is... 🥁🥁🥁 80!

Every 80 messages, we see a definite spike in request time! To be fair, we don't only see a latency spike on the 81st message, but we always see it every 80.

And what do you know? 1 - 1/80 = 0.9875, i.e. the 98.75th percentile. This is where our definite 99th percentile latency comes from! So what the heck is happening every 80 messages?

Test Setup

To test sqs:SendMessage latency, I set up an EC2 instance in order to run calls from within a VPC. This should minimize network factors. Typically, we see intra-vpc packet latency of about 40 microseconds (the best of the three largest cloud providers; study courtesy of Cockroach Labs). So any latency we see in our calls to SQS can reasonably be attributed to just SQS and not the network between my test server and SQS servers.

Here are the test parameters:

  1. Queue encryption enabled with the default AWS-provided key.
  2. The data key reuse period for encryption was 30 minutes.
  3. Message expiration of 5 minutes. We'll never consume any messages and instead, choose to let them expire.
  4. Visibility timeout of 30 seconds. This should not affect anything as we are not consuming messages.
  5. Message Payload of 1 kB of random bytes. Random so that it's not compressible.
  6. Concurrent runs capped to 16.
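
Before the results, here's a stripped-down sketch of the measurement loop (assuming boto3 credentials and a pre-existing queue; the queue URL is a placeholder, and the real harness also handled the concurrency):

import os
import time

import boto3

sqs = boto3.client("sqs", region_name="us-west-2")
QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/latency-test"  # placeholder

def one_run(n_messages: int = 180) -> list[float]:
    """Send n messages serially on one connection and record each call's latency in ms."""
    latencies = []
    for _ in range(n_messages):
        body = os.urandom(512).hex()  # ~1 kB of incompressible payload
        start = time.perf_counter()
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

runs = [one_run() for _ in range(100)]
# Rough p75 at a few send positions across the 100 runs.
for position in (0, 80, 160):
    samples = sorted(run[position] for run in runs)
    print(position + 1, samples[int(len(samples) * 0.75)])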

I ran 100 runs, each consisting of 180 serial sends each and then aggregated the results. Here are the results, plotting the p75 latency across all the calls at a given position:

[Figure: p75 sqs:SendMessage latency by send position]

Notice that every 80 messages we see a big bump in the p75 latency? We have high call times on the 1st, 81st, and 161st messages. Very suspicious!

Connection Reuse

The AWS client library I use is boto3, which also returns response headers. As it turns out, SQS explicitly sends us Connection: close after 80 messages have been sent on a connection. SQS is deliberately closing the connection! You'll typically see this behavior in distributed systems to prevent too much load from going to one node.
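
You can spot this yourself in the metadata boto3 attaches to every response. A small sketch, reusing the client from the snippet above:

resp = sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="hello")
# Normalize header casing before checking for the close signal.
headers = {k.lower(): v for k, v in resp["ResponseMetadata"]["HTTPHeaders"].items()}
if headers.get("connection", "").lower() == "close":
    print("SQS asked us to close this connection")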

However, we don't just see slow calls every 80 sends. We see slow calls all over the place, just less frequently. Here is the max() call time from each position.

[Figure: max sqs:SendMessage latency by send position]

This seems like evidence that the connections are being dropped even before the 80th message. Boto3 uses urllib3 under the hood, and urllib3 comes with excellent logging. Let's turn it on and see what's going on!

import logging

# Turn on urllib3's debug logging so we can see connection setup, reuse, and resets.
logging.getLogger('urllib3').setLevel(logging.DEBUG)

# Ship the records to the console so they show up in our test output.
logger = logging.getLogger()
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_handler.setFormatter(logging.Formatter("[%(name)s]: %(message)s"))
logger.addHandler(console_handler)

And, after executing a single serial run we see connection resets exactly where we'd expect them.

[Log output: urllib3 shows connection resets at exactly the expected positions]

Unfortunately, we don't also see "silently dropped connections" (aka TCP connections closed without a Connection: close header). urllib3 indicates that some of the non-80 slowness is still happening over an already-established connection. The mystery continues...

SQS VPC Endpoint

There was some question about what kind of performance impact VPC Endpoints would have. So I set one up for SQS in my test VPC and the results were... meh. Here's a histogram of call times and some summary statistics:

[Figure: histogram of SendMessage call times with and without the VPC endpoint]

[Summary statistics]

Completely meh. The VPC Endpoints runs are actually worse at the tail. You'd think that eliminating any extra network hops through an Endpoint would reduce that, but 🤷.

Here are all the test parameters I checked. You can see that the 99% remains pretty high.

[Table: test parameters and their p99 latencies]

Further Work

I think this is the end of the line for SQS performance. We've made sure that everything between us and SQS is removed and yet we're still seeing random spikes of latency that can't be explained by connection reuse.

SQS's Slow Tail Latency

· 11 min read
Hunter Fernandes
Software Engineer

At my company, we use AWS Simple Queue Service (SQS) for a lot of heavy lifting. We send ✨many✨ messages. I can say that it's into the millions per day. Because we call SQS so frequently and instrument everything, we get to see its varied performance under different conditions.


Backend Background

Our application backend is written in Python using Django and Gunicorn and uses the pre-fork model.1 This means that we always need to have a Gunicorn process available and waiting to pick up a connection when it comes in. If no Gunicorn process is available to receive the connection, it just sits and hangs. This means that we need to optimize on keeping Gunicorn processes free. One of the main ways to do this is to minimize the amount of work a process must perform in order to satisfy any single API request.

But that work still needs to be done, after all. So we have a bunch of separate workers waiting to receive tasks in the background. These background workers process work that takes a long time to complete. We offload all long work from our Gunicorn workers to our background workers. But how do you get the work from your frontend Gunicorn processes to your background worker pool?

The answer is by using a message queue. Your frontend writes ("produces") jobs to the message queue. The message queue saves it (so you don't lose work) and then manages the process of routing it to a background worker (the "consumers"). The message queue we use is SQS. We use the wonderful celery project to manage the producer/consumer interface. But the performance characteristics all come from SQS.

To recap, if a Gunicorn API process needs to do any non-negligible amount of work when handling a web request, it will create a task to run in the background and send it to SQS through the sqs:SendMessage AWS API. The net effect is that the long work has been moved off the critical path and into the background. The only work remaining on the critical path is the act of sending the task to the message queue.
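
In code, the offload itself is a one-liner once the task is defined. Roughly (a sketch; the task name is made up, and celery's SQS broker does the sqs:SendMessage under the hood):

from celery import Celery

# Celery's SQS transport turns .delay() into an sqs:SendMessage call.
app = Celery("tasks", broker="sqs://")

@app.task
def send_welcome_email(user_id: str) -> None:
    ...  # long-running work, handled later by a background worker

# In the Gunicorn request handler: enqueue and return immediately.
send_welcome_email.delay("user-123")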

So as long as you keep the task-send times low, everything is great. The problem is that sometimes the task-send times are high. You pay the price for this on the critical path.

Latency at the Tail

We care a lot about our API response time: high response times lead to worse user experiences. Nobody wants to click a button in an app and get three seconds of spinners before anything happens. That sucks. It's bad UX.

One of the most valuable measures of performance in the real world is tail latency. You might think of it as how your app has performed in the worst cases. You line up all your response times (fastest-to-slowest) and look at the worst offenders. This is very hard to visualize, so we apply one more transformation: we make a histogram. We make response-time buckets and count how many responses fall into each bucket. Now we've formed a latency distribution diagram.

Quick note for the uninitiated: pX are percentiles. Roughly, this means that pX is the Xth slowest time out of 100 samples. For example, p95 is the 95th slowest. It scales, too: p99.99 is the 9,999th slowest out of 10,000 samples. If you're more in the world of statistics, you'll often see it written with subscript such as p95.
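
If you want to compute these from raw samples yourself, it's a one-liner (a sketch with numpy):

import numpy as np

response_times_ms = [12, 15, 14, 18, 95, 13, 16, 2200, 17, 14]
print(np.percentile(response_times_ms, 95))  # the p95 of this tiny sample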

You can use your latency distribution diagram to understand at a glance how your application performs. Here's our sqs:SendMessage latency distribution sampled over the past month. I've labeled the high-end of the distribution, which we call the tail.

[Figure: sqs:SendMessage latency distribution, with the tail labeled]

What's in the tail?

We have a backend-wide meeting twice a month where we go over each endpoint's performance. We generally aim for responses in less than 100 milliseconds. Some APIs are inherently more expensive, they will take longer, and that's ok. But the general target is 100ms.

One of the most useful tools for figuring out what's going on during a request is (distributed) tracing! You may have heard of this under an older and more vague name: "application performance monitoring" (APM). Our vendor of choice for this is Datadog, who provides an excellent tracing product.

For the last several performance meetings, we've had the same problem crop up. We look at the slowest 1-3% of calls for a given API endpoint and they always look the same. Here's a representative sample:

[Trace screenshot: SQS is very slow.]

See the big green bar at the bottom? That is all sqs:SendMessage. Of the request's 122 ms response time, 95 ms -- about 78% of the total -- was spent waiting on sqs:SendMessage. This specific sample was the p97 for this API endpoint.

There are two insights we get from this trace:

  1. To improve the performance of this endpoint, we need to focus on SQS. Nothing else really matters.

  2. High tail latency for any of our service dependencies is going to directly drive our API performance at the tail. Of course, this makes total sense. But it's good to acknowledge it explicitly! It's futile to try to squeeze better performance out of your own service if a dependency has poor tail latency.

    The only way to improve the tail latency is to either drop the poor performer or to change the semantics of your endpoint. A third option is to attempt to alleviate the tail latencies for your service dependency if you can do so. You can do this if you own the service! But sometimes, all you have is a black box owned by a third party (like SQS).

Strategies to Alleviate Dependent Tail Latency

Dropping the Service

Well, we can't drop SQS at this time. We'll evaluate alternatives some time in the future. But for now, we're married to SQS.

Changing API Semantics

An example of changing API semantics: instead of performing the blocking SendMessage operation on the main API handler thread, you might spin the call out to its own dedicated thread. Then, you might check on the send status before the API request finishes.

The semantic "twist" happens when you consider what happens in the case of a 5, 10, or even 20 second SendMessage call time. What does the API thread do when it's done handling the API request but the SendMessage operation still hasn't yet been completed? Do you just... skip waiting for the send to complete and move on? If so, your semantics have changed: you can no longer guarantee that you durably saved the task. Your task may never get run because it never got sent.

For some exceedingly rare endpoints, that's acceptable behavior. For most, it's not.

There is yet another point hidden in my example. Instead of just not caring whether the task was sent, say instead we block the API response until the task send completes (off thread). Basically, we get the amount of time we spend doing other non-SQS things shaved off of the SQS time. But we still have to pay for everything over the other useful time. In this case, we've only improved perhaps the p80-p95 of our endpoint. But the worst-case p95+ will still stay the same! If our goal in making that change was to reduce the p95+ latency then we would have failed.
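
Here's a sketch of that off-thread variant (illustrative only; the non-SQS work is a made-up helper). The final wait is the part you'd be tempted to skip -- and skipping it is exactly the semantic change described above:

from concurrent.futures import ThreadPoolExecutor

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/tasks"  # placeholder
executor = ThreadPoolExecutor(max_workers=4)

def handle_request(task_payload: str) -> dict:
    # Kick the SendMessage off-thread so it overlaps with the rest of the handler.
    future = executor.submit(
        sqs.send_message, QueueUrl=QUEUE_URL, MessageBody=task_payload
    )

    response = do_other_request_work()  # hypothetical: the non-SQS part of the handler

    # Block before responding to keep the "task was durably saved" guarantee.
    future.result(timeout=5)
    return response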

Alleviating Service Dependency Tail Latency

SQS has a few knobs we can turn to see if they help with latency. Really, these knobs are other SQS features, but perhaps we can find some correlation to what causes latency spikes and work around them. Furthermore, we can use our knowledge of the transport stack to make a few educated guesses.

Since SQS is a black box to us, this is very much spooky action at a distance.

SQS Latency Theories

So why is SQS slow? Who knows? Well, I guess the AWS team knows, but they aren't telling.

I've tested several hypotheses and none of them give a solid answer for why we sometimes see 5+ second calls.

I've documented my testing here for other folks on the internet with the same issue. I've looked at:

  1. Cold-start connections
  2. Increasing HTTP Keep-Alive times
  3. Queue encryption key fetching
  4. SQS rebalancing at inflection points

Cold-start connections

First theory: the cost of initiating a new connection to the SQS service before boto3's keep-alive kicks in. If our process has never talked to SQS before, it stands to reason that there is some initial cost in establishing a connection in terms of TCP setup, TLS handshaking, and perhaps some IAM controls on AWS' side.

To test this, we ran a test that cold-started an SQS connection, sent the first message, and then sent several messages afterwards when keep-alive was active. Indeed, we found that the p95 of the first SendMessage was 87ms and the p95 of the following 19 calls was 6ms.

It sure seems like cold-starts are the issue. Keep-alive should fix this issue.

A confounding variable is that sometimes we see slow SendMessage calls even in the next 20 operations following an initial message. If the slow call was just the first call, we could probably work around it. But it's not just the first one.

Increasing Keep-Alive Times

In boto3, you can set the maximum amount of time that urllib3 will keep a connection around without closing it. It should really only affect idle connections, however.

We cranked this up to 1 hour and there was no effect on send times.

Queue Encryption Key Fetching

Our queues are encrypted with AWS KMS keys. Perhaps this adds jitter?

Our testing found KMS queue encryption does not have an effect on SendMessage calls. It did not matter if the data key reuse period was 1 minute or 12 hours.

Enabling encryption increased the p95 of SendMessage by 6ms. From 12ms without encryption to 18ms with encryption. That's pretty far away from the magic number of 100ms we are looking for.

SQS Rebalancing at Inflection Points

Perhaps some infrastructure process happens behind the scenes when we scale up/down and send more (or fewer) messages throughout the day. If so, we might see performance degradation when we are transitioning along the edges of the scaling step function. Perhaps SQS is rebalancing partitions in the background.

If this were to be the case, then we would see high latency spikes sporadically during certain periods throughout the day. We'd see slow calls happen in bursts. I could not find any evidence of this from our metrics.

Instead, the slow calls are spread out throughout the day. No spikes.

So it's probably not scaling inflection points.

AWS Support

After testing all of these theories and not getting decent results, we asked AWS Support what the expected SendMessage latency is. They responded:

[The t]ypical latencies for SendMessage, ReceiveMessage, and DeleteMessage API requests are in the tens or low hundreds of milliseconds.

What we consider slow, they consider acceptable by design.

Here's an output characteristic curve from the IRLB8721PbF MOSFET datasheet (pdf).

Hardware has such nice spec sheets.

Frankly, I consider this answer kind of a cop-out from AWS. For being a critical service to many projects, the published performance numbers are too vague. I had a brief foray into hardware last year. One of the best aspects of the hardware space is that every component has a very detailed spec sheet. You get characteristic curves for all kinds of conditions. In software, it's just a crapshoot. Services aiming to underpin your own service should be publishing spec sheets.

Normally, finding out the designed performance characteristics are suboptimal would be the end of the road. We'd start looking at other message queues. But we are tied to SQS right now. We know that the p90 for SendMessage is 40ms so we know latencies lower than 100ms are possible.

What Now?

We don't have a great solution to increase SQS performance. Our best lead is cold-start connection times. But we have tweaked all the configuration that is available to us and we still do not see improved tail latency.

If we want to improve our endpoint tail latency, we'll probably have to replace SQS with another message queue.

Footnotes

  1. There are many problems with the pre-fork model. For example: accept4() thundering herd. See Rachel Kroll's fantastic (and opinionated) post about it. Currently, operational simplicity outweighs the downsides for us.

Athena 2 Cloudtrail "HIVE_BAD_DATA: Line too long"

· 3 min read
Hunter Fernandes
Software Engineer

Amazon recently announced the general availability of Athena 2, which contains a bunch of performance improvements and features.

As part of our release process, we query all of our Cloudtrail logs to ensure that no secrets were modified unexpectedly. But Cloudtrail has hundreds of thousands of tiny JSON files, and querying them with Athena takes forever. This is because under the hood Athena has to fetch each file from S3. This takes 20-30 minutes to run, and hurts the developer experience.

Worse than taking forever, it frequently throws an error of Query exhausted resources at this scale factor. The documentation suggests this is because our query uses more resources than planned. While you can typically get this error to go away if you run the same query a few more times, you only encounter the error after 15 minutes. It's a huge waste of time.

To fix this, every night we combine all tiny Cloudtrail files into a single large file. This file is about 900 MB of raw data but compresses down to only 60 MB. We instead build our Athena schema over these compressed daily Cloudtrail files and query them instead. This reduces the query time to only 3-4 minutes or so.

This worked great on Athena 1. But on Athena 2, we started seeing errors like this:

Your query has the following error(s):
HIVE_BAD_DATA: Line too long in text file: s3://xxx/rollup/dt=20190622/data.json.gz
This query ran against the "default" database, unless qualified by the query.
Please post the error message on our forum or contact customer support with Query Id: aaa8d916-xxxx-yyyy-zzzz-000000000000.

Contrary to the error message, none of the lines in the file are too long. They are at most about 2 kB. There seems to be a bug in the AWS-provided Cloudtrail parser that treats the whole file as a single line, which violates some hidden cap on line length.

Some sleuthing of the Presto source code (which Athena is based on) shows that there is a default maximum line length of 100 MB. Now, we split the consolidated Cloudtrail log into 100 MB chunks and query those instead.
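
The nightly split is simple, if annoying. A rough sketch, assuming the rollup is newline-delimited JSON (this is not our exact job):

import gzip

CHUNK_BYTES = 100 * 1024 * 1024  # stay under Presto's default limit

def split_rollup(src: str, dest_prefix: str) -> None:
    """Split one consolidated Cloudtrail JSON-lines file into ~100 MB gzipped chunks."""
    part, written = 0, 0
    out = gzip.open(f"{dest_prefix}-{part:03d}.json.gz", "wt")
    with gzip.open(src, "rt") as lines:
        for line in lines:
            if written + len(line) > CHUNK_BYTES:
                out.close()
                part, written = part + 1, 0
                out = gzip.open(f"{dest_prefix}-{part:03d}.json.gz", "wt")
            out.write(line)
            written += len(line)
    out.close()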

This works out fine. But it's a pain and a waste of time to do this.

Athena has a cap on the total number of partitions you can have in a table. We used to consume only one partition per day, but this change ups it to 9 per day (and growing with data growth). Since the cap is 20,000, we're still well within quota.

I'm hoping that AWS will fix this bug soon. Everything about it is needlessly annoying.

MySQL Proxies

· 5 min read
Hunter Fernandes
Software Engineer

Our Django application connects to MySQL to store all of its data. This is a pretty typical setup. However, you may not know that establishing connections to MySQL is rather expensive.

MySQL Connections

I have written previously about controlling API response times -- we take it very seriously! One thing that blows up response latencies at higher percentiles is when our application has to establish a new MySQL connection. This tacks on an additional 30-60ms of response time. This cost comes from the MySQL server-side -- connections are internally expensive and require allocating buffers.

The network does not significantly contribute to the setup cost. Go ahead and set up a local MySQL server and take connection timings. You will still see 30ish milliseconds even over loopback! Even AWS' own connection guide for debugging MySQL packets shows the handshake taking 40ms!

[Figure: AWS guide showing the MySQL connection handshake taking ~40 ms]
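
If you want to take those timings yourself, here's a quick sketch using PyMySQL against a local server (credentials are placeholders):

import time

import pymysql  # pip install pymysql

def time_connect(n: int = 10) -> None:
    for _ in range(n):
        start = time.perf_counter()
        conn = pymysql.connect(host="127.0.0.1", user="root", password="", database="mysql")
        print(f"connect took {(time.perf_counter() - start) * 1000:.1f} ms")
        conn.close()

time_connect()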

We have two other wrinkles:

  1. Django adds some preamble queries, so that adds further to connection setup costs.
  2. We need to be able to gracefully (and quickly) recover from database failover. In practice, this means that we need to re-resolve the database hostname every time we connect. RDS DNS resolution is slow. I have the receipts! These instrumented numbers factor in cache hits.

[Figure: instrumented RDS DNS resolution and connection timings]

All of this is to say that you want to reduce the number of times that you have to establish a MySQL connection.

The most straightforward way to do this is to reuse connections. And this actually works great up to a certain point. However, there are two quirks:

First, connections have server-side state that is expensive to maintain. A connection may take up a megabyte of RAM on the database even if the connection is doing nothing. This number is highly sensitive to database configuration parameters. This cost is multiplied by thousands of connections.

Second, we have API processes and background processes that oftentimes are doing nothing. Due to the nature of our workload, many of our services are busy at times when other services are not. In aggregate, we have a nice load pattern. But each particular service has a spiky load pattern. If we keep connections open forever, we are hogging database resources for connections that are not being used.

We have here a classic engineering tradeoff! Do we keep the connections open forever and hog database resources to ultimately minimize database connection attempts?

  Short Lifetimes         Long Lifetimes
  Least idle waste ✅     Most idle waste
  Frequent reconnects     Minimal reconnects ✅
  Higher p95              Lower p95 ✅
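
In Django, this tradeoff is literally one setting: CONN_MAX_AGE controls how long a connection may be reused before it is closed. A sketch of how it looks (values illustrative):

# settings.py
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "app",
        "HOST": "db.example.internal",  # placeholder
        # 0 closes the connection after every request (short lifetimes);
        # 300 keeps it around for five minutes (fewer reconnects);
        # None keeps it open forever.
        "CONN_MAX_AGE": 300,
    }
}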

Database Proxies

But we want low idle waste while also having minimal reconnects and a lower p95. What do we do?

The answer is to use a database proxy. Instead of your clients connecting directly to the database server, the client makes a "frontend connection" to the proxy. The proxy then establishes a matching "backend connection" to the real database server. Proxies (generally) talk the MySQL wire protocol, so as far as your application code is concerned, nothing has changed.

When your client is no longer actively using the MySQL connection, the proxy will mark the backend connection as inactive. The next time a client wants to connect to the database via the proxy, the proxy will simply reuse the existing MySQL connection instead of creating a new one.

Thus, two (or more) client MySQL connections can be multiplexed onto a single real backend connection. The effect of this is that

  1. The number of connections to the MySQL database is cut down significantly, and
  2. Clients connecting to the proxy are serviced very quickly. Clients don't have to wait for the slow backend connection setup.

The first wrinkle is that if both frontend connections want to talk to the database at the same time then the proxy has to either

  1. Wait for one frontend connection to become inactive before servicing the other (introducing wait time), or
  2. Spin up another backend connection so that both frontend connections can be serviced at the same time, which makes you still pay the connection setup price as well as the backend state price.

What the proxy actually does in this case depends on the configuration.

A second wrinkle occurs due to the highly stateful nature of MySQL connections. There is a lot of backend state for a connection. The proxy needs to know about this state as well, or it could errantly multiplex a frontend connection onto a backend connection where the expected states are misaligned. That is a fast way to cause serious issues.

To solve this, proxies track the state of each frontend and backend connection. When the proxy detects that a connection has done something to affect the state that tightly bounds the frontend to the backend connection, the proxy will "pin" the frontend to the backend and prevent backend reuse.

RDS Proxy

There are a few MySQL database proxies that are big in the FOSS ecosystem right now. The top two are ProxySQL and Vitess (where the proxy ability is just a small part of a much larger project). Running a proxy yourself means adding more custom infrastructure that comes with its own headaches, though. A managed/vendor-hosted version is better at our scale.

And, what do you know? AWS just released RDS Proxy with support for Aurora MySQL. I tried it out and found it... wanting.

Look for my experience with RDS proxy in a post coming soon!

AWS Global Accelerator

· 4 min read
Hunter Fernandes
Software Engineer

A few months ago AWS introduced a new service called Global Accelerator.

AWS Cloudfront PoPs around the United States.

This service is designed to accept traffic at local edge locations (maybe the same Cloudfront PoPs?) and then route it over the AWS backbone to your service region. An interesting feature is that it does edge TCP termination, which can save latency on quite a few packet round trips.

Bear in mind that, after the TCP handshake, the TLS handshake is still required, and that requires a round trip to the us-west-2 (Oregon) region regardless of the edge location used by Global Accelerator.
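
You can measure the two handshakes separately with a few lines of Python -- roughly the same breakdown as the table below (the hostname is a placeholder):

import socket
import ssl
import time

HOST = "api.example.com"  # placeholder endpoint behind Global Accelerator

start = time.perf_counter()
sock = socket.create_connection((HOST, 443))
tcp_ms = (time.perf_counter() - start) * 1000  # TCP handshake (terminates at the edge)

start = time.perf_counter()
tls = ssl.create_default_context().wrap_socket(sock, server_hostname=HOST)
tls_ms = (time.perf_counter() - start) * 1000  # TLS handshake (still round-trips to the region)

print(f"TCP {tcp_ms:.0f} ms, TLS {tls_ms:.0f} ms")
tls.close()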

Performance Results

Of course I am a sucker for these "free" latency improvements, so I decided to give it a try and set it up on our staging environment. I asked a few coworkers around the United States to run some tests and here are the savings:

  Location                  TCP Handshake    TLS Handshake    Weighted Savings
  Hunter @ San Francisco    -42 ms           -1 ms            -43 ms
  Kevin @ Iowa              -63 ms           -43 ms           -131 ms
  Matt @ Kentucky           -58 ms           -27 ms           -98 ms
  Sajid @ Texas             -50 ms           +57 ms           -14 ms

My own entry from San Francisco makes total sense. We see a savings on the time to set up the TCP connection because instead of having to roundtrip to Oregon, the connection can be set up in San Jose. It also makes sense that the TLS Handshake did not see any savings, as that still has to go to Oregon. The path from SF Bay Area to Oregon is pretty good, so there is not a lot of savings to be had there.

However, for Iowa and Kentucky, the savings are quite significant. This is because instead of transiting over the public internet, the traffic is now going over the AWS backbone.

Here's a traceroute from Iowa comparing the public internet to using Global Accelerator.

  • Green is with Global Accelerator.
  • Red is without Global Accelerator using the public internet.

Traceroute from Iowa

You can see that the path is much shorter and more direct with Global Accelerator. Honestly, me using Iowa as a comparison here is a bit of a cheat, as you can see from the AWS PoP / Backbone map that there is a direct line from Iowa to Oregon.

But that is kind of the point? AWS is incentivized for performance reasons to create PoPs in places with lots of people. AWS is incentivized to build out their backbone to their own PoPs. Our customers are likely to be in places with lots of people. Therefore, Global Accelerator lets us reach our customers more directly, and AWS is incentivized to keep building that network out.

Where it gets weird is Texas. The TCP handshake is faster, but the TLS handshake is slower. I am not sure why this is. In fact, I checked with other coworkers from different areas in Texas and they had better results.

Production

I was happy with the results and decided to roll it out to production while keeping an eye on the metrics from Texas. We rolled it out to 5% of our traffic and everything seemed to be going well, so we rolled it out to 20% then 100% of our traffic.

We observed a 17% reduction in latency across the board and a 38% reduction in the 99th percentile latency. That is an amazing improvement for a service that is just a few clicks to set up.

I am pleased to say the data from Texas has improved as well. While I am not sure what the issue was, it seems to have resolved itself. Hopefully AWS will release some better network observation tools in the future to aid debugging these issues.

AWS Cognito Limitations

· 6 min read
Hunter Fernandes
Software Engineer

When we were initially rolling out user accounts, we decided to go with AWS Cognito. It has been incredibly frustrating to use and I need to rant about it.

tl;dr Don't use Cognito.

Cognito & JWTs

AWS released Cognito in 2014, and its goal is to serve as the authentication backend for the system you write. You integrate with Cognito by setting up a Cognito User Pool and by accepting user tokens signed by Amazon.

At a high level, the user authenticates against Cognito and receives three JSON Web Tokens that are signed by Amazon:

  1. an Access token that holds very few claims about an account. Essentially just a User ID. This is supplied to our API to prove it's you.
  2. an ID token which contains all attributes for an account. Think email, phone, name, etc.
  3. a Refresh token to get more access tokens once they expire.

When users call one of our APIs, they supply the access token in the Authorization header. It looks something like this:

GET /identity/v1/users/$me/ HTTP/1.1
Host: api.carium.com
Authorization: Bearer the-very-long-access-token-goes-here

On seeing this Authorization header, our API verifies the token is signed (correctly) by Amazon and that the token is not expired among other things. If the signatures match and other criteria are met, then we know that you are you and we can give you privileged access to your account!
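
Concretely, that verification is a signature check against Cognito's published JWKS plus a few claim checks. A minimal sketch with the PyJWT library (pool and client IDs are placeholders; not our exact middleware):

import jwt  # pip install pyjwt[crypto]

REGION = "us-west-2"                     # placeholder
USER_POOL_ID = "us-west-2_EXAMPLE"       # placeholder
APP_CLIENT_ID = "example-app-client-id"  # placeholder

ISSUER = f"https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}"
jwks = jwt.PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

def verify_access_token(token: str) -> dict:
    signing_key = jwks.get_signing_key_from_jwt(token)
    # Checks the RS256 signature, expiry, and issuer in one call.
    claims = jwt.decode(token, signing_key.key, algorithms=["RS256"], issuer=ISSUER)
    if claims.get("client_id") != APP_CLIENT_ID or claims.get("token_use") != "access":
        raise jwt.InvalidTokenError("wrong client or token_use")
    return claims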

But access tokens only last for an hour. We don't want the user to have to log in every hour, so the client uses the refresh token to acquire another access token once the current one expires. This is done via a Cognito API. Therefore, the client will be refreshing the access token every hour for the duration of the session. (If the client misses a refresh period, that's fine. There is no continued-refresh requirement.)
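
The refresh itself is a single Cognito API call. Roughly, with boto3 (the client ID is a placeholder):

import boto3

cognito = boto3.client("cognito-idp", region_name="us-west-2")

def refresh_access_token(refresh_token: str) -> str:
    resp = cognito.initiate_auth(
        ClientId="example-app-client-id",  # placeholder
        AuthFlow="REFRESH_TOKEN_AUTH",
        AuthParameters={"REFRESH_TOKEN": refresh_token},
    )
    return resp["AuthenticationResult"]["AccessToken"]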

That is the gist of JWTs. Now, back to Cognito.

The Good

Why did we go with Cognito in the first place?

  1. It's not our core competency. Using Cognito allows us to offload complex identity management to a team of experts that live and breathe identity. And because it's their core focus, you get cool things like...

  2. Secure Remote Password (SRP) protocol. Instead of us offloading password-derivatives onto AWS, with SRP even AWS doesn't know the password! With SRP, the actual password is never transferred to a remote server. That is super cool.

  3. Handles registration for us. Cognito will take care of verifying email addresses and even phone numbers so that you don't need to implement that flow.

The Bad

Now that I've listed all the reasons we started going with Cognito, here are all the pain points we've felt along the way. Some of them are very painful.

  1. No custom JWT fields. Cognito does not allow you to store custom fields on the access tokens. We have some information that we really want to stick on the stateless tokens.

  2. Cognito doesn't let us issue tokens for a user without their password. I will be the first to admit that this is really nice from a security perspective. But we are a healthcare app and we need to be able to help our users when they are in technical trouble.

    One of the most powerful tools we can give to our support staff is impersonating patients to see the issue they are seeing. We can't do that without either a) issuing tokens for the user or b) adding a custom field on JWTs (along with some logic in our apps to recognize these new fields). But neither is possible in Cognito!

  3. There is no easy way to differentiate email templates between events like verify-email and forgot-password. You have to give Cognito a single giant email template and fill it with crazy if-else blocks to render parts differently based on event parameters. This should be a lot easier.

  4. No multipart email support. That means your emails (for sign up, resetting a password, etc) can either be plaintext or html, but not both. If you want to include links in your mail then simple email clients won't be able to render your message at all (they would normally render the plaintext version).

  5. cognito:GetUser has a permanently-low rate limit. That's right, the only API that will give you all user attributes (as well as verifying the token) can't be called that often. And it's a hard limit, too. It does not scale up as the number of users in your pool increases (confirmed with AWS Support).

    What this means is that you have to build your own storage to mirror the attributes in the pool. If you have to do that, then why are you using Cognito at all?

  6. Cognito does not allow passwords with spaces. Yes, really. That means the old "correct horse battery staple" advice is not allowed. That is insane. And further than that, it's an insane requirement that we need to justify to our customers. We are not a bank from the 90s but that's the first impression our users get of us.

    [Image: "correct horse battery staple" banned]

  7. No SAML integration. As a business that interfaces with large health systems, we will need to support SAML at some point in the future. Cognito does not support this at all.

  8. And the biggest one of all: no ability to backup a user pool. If you accidentally delete your user pool or errant code goes rogue, then your business is over. You don't get to take backups. That's it. Done. You can't even move users from one pool to another (I expect this has something to do with SRP keys).

    As a workaround, you can collect all attributes of your users and store that list somewhere as a crap backup. But you can't automatically create restored users in a new pool (they need to verify their email first). Furthermore, your list would still require users to reset their password because Cognito cannot give you a hash (or, instead, their secret half of the SRP).

    All of the other problems with Cognito make me annoyed, but an inability to backup my user list legitimately terrifies me.

The Ugly

So due to Cognito limitations, we will have to implement our own user store and authentication service.

I think we ultimately failed because we tried to bend Cognito to fit our needs and Cognito is not designed for that. Cognito demands that your app bend to Cognito's auth flows. That's fine for mobile app du jour, but it just doesn't work for the enterprise software half of our business.

Writing an authentication backend is hard, the risks are high, and the user migration will be long.

Oh well.