Coinciding with AWS Pi Week, Amazon Web Services (AWS) released and discussed S3 Object Lambda Access Points. Object Lambda Access Points introduce a layer of compute within Amazon Simple Storage Service (S3), AWS's highly popular object storage service. An application or a user can interact with the access point in the same way as a bucket, with the added functionality of an AWS Lambda function that can trigger events, modify the response, and more, depending on its logic. Before this, there was no compute layer behind S3: you had to develop your own solution to manipulate any data S3 served to an end user or application. S3 Object Lambda is an exciting new feature, and it does not replace the original S3 bucket endpoints, mitigating any disruption risk to processes that rely on traditional S3 workflows.
The immediate focus of this new functionality is on redaction, log filtering, and other security and privacy use cases, which drew our attention and excitement as security professionals. AWS has said, including in the Pi Week session, that it can be used to modify data in any way, including enrichment, but demonstrations so far have mostly centered on redacting or authorizing data access, slimming down logs, and very rudimentary transformations such as changing character case.
The public discussion and focus on restricting and redacting data is quite reasonable for the security industry; however, this blog demonstrates a less discussed use case: transparent log enrichment. In other words, optionally increasing log volume rather than decreasing it. Our goal is to draw the community's attention to this feature, with the hope that people continue to discuss and expand their use cases for it. To their credit, AWS calls this out in their initial announcement as "augmenting data with information from other services or databases."
In this example, we'll focus mostly on enriching the source IP addresses within log events. Using S3 Object Lambda, we'll geolocate and threat-score the IPs in a simple CloudTrail log from behind the S3 layer. We'll then cache those lookups to an S3 bucket, both for performance and to respect API rate limits. A purpose-built cache such as ElastiCache is likely more appropriate here, but S3 was chosen for its wide familiarity and simplicity. This process should be usable for other log formats with limited modification. What's the end result? A new endpoint (an Object Lambda Access Point) similar to an S3 bucket endpoint from which to download your CloudTrail events, which now carry additional inline context such as each IP's abuse reputation and location.
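As a rough sketch of that caching pattern (the bucket name, key layout, and helper names here are our own illustration, not the exact code from the function linked below):

import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
INTEL_BUCKET = 'my-intel-cache-bucket'  # hypothetical; the real function reads this from an environment variable

def cached_lookup(ip, provider, fetch):
    """Return a cached lookup for ip/provider, or call fetch(ip) and cache the result."""
    key = f'{provider}/{ip}.json'
    try:
        # Cache hit: the lookup has been done before, skip the external API call.
        obj = s3.get_object(Bucket=INTEL_BUCKET, Key=key)
        return json.loads(obj['Body'].read())
    except ClientError as e:
        if e.response['Error']['Code'] != 'NoSuchKey':
            raise
    # Cache miss: call the provider (e.g. a geolocation or reputation API) and store the result.
    result = fetch(ip)
    s3.put_object(Bucket=INTEL_BUCKET, Key=key, Body=json.dumps(result))
    return result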
To do this, as with most Object Lambda scenarios, you need:
- An S3 bucket to analyze (in this case CloudTrail data).
- An S3 Access Point, found in the S3 console under “Access Points,” that points to the original S3 bucket.
- An IAM Role that grants both the usual Lambda execution permissions (CloudWatch Logs writes, etc.) and the s3-object-lambda:WriteGetObjectResponse action (with the resource being your access point). In this example, we also grant get/put object access to the cache S3 bucket (see the policy sketch after this list).
- A Lambda function that retrieves the original object (via the presigned URL S3 supplies in the invocation event) and returns the transformed object. See AWS's blog for more information.
- An S3 Object Lambda Access Point (yes, you need both an OLAP and an AP), found directly below Access Points in the console, which points to the Access Point above.
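A minimal sketch of the non-managed portion of that IAM policy might look like the following, reusing the Object Lambda Access Point ARN from the retrieval example later in this post and a hypothetical cache bucket name (you would still attach the usual Lambda execution permissions, e.g. the AWSLambdaBasicExecutionRole managed policy):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3-object-lambda:WriteGetObjectResponse",
            "Resource": "arn:aws:s3-object-lambda:us-east-1:123456789123:accesspoint/crypsis-cloudtrail-demo-enrichment-compute"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-intel-cache-bucket/*"
        }
    ]
}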
The following APIs were used:
- ipstack, for IP geolocation.
- IPQualityScore, for IP reputation and fraud scoring.
- Palo Alto Networks AutoFocus, for threat intelligence context.
In addition, we provide a small helper function that weighs the event context against riskier actions. This helper exists to demonstrate that the possibilities extend well beyond IP lookups.
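As a rough illustration of what such a helper might look like (the action names and weights below are invented for this sketch, not the actual logic from our snippet):

def native_risk_score(event):
    """Score a CloudTrail event: riskier API calls and contexts weigh more."""
    risky_actions = {'DeleteTrail', 'StopLogging', 'PutBucketPolicy', 'CreateAccessKey'}
    score = 0
    if event.get('eventName') in risky_actions:
        score += 50  # high-impact API calls
    if not event.get('readOnly', True):
        score += 25  # write operations carry more risk than reads
    if event.get('userIdentity', {}).get('type') == 'Root':
        score += 25  # root usage is rarely expected
    return min(score, 100)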
The example code for the Lambda function is available here. Developers are encouraged to modify the snippet to their needs, as it exists largely as a proof of concept. It expects several environment variables: AF_API_ENABLED, AF_API_KEY, IPQS_API_ENABLED, IPQS_API_KEY, IPSTACK_API_ENABLED, IPSTACK_API_KEY, NATIVE_RISK_ENABLED and INTEL_BUCKET. These break down into feature flags that enable certain lookups, the API keys themselves, and the writable S3 bucket used to cache lookups. Feature flags are important because some APIs perform better than others, duplicate one another's work, and so on. Bundle the function with the following libraries: requests, the latest boto3, and autofocus-client-library if using AutoFocus.
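At a high level, an Object Lambda function follows the shape below: it reads the original, untransformed object via the presigned URL S3 supplies in the event, transforms it, and hands the result back with WriteGetObjectResponse. This is a simplified skeleton of that flow, not the full snippet (enrich_record stands in for the lookup and caching logic):

import json
import boto3
import requests

s3 = boto3.client('s3')

def handler(event, context):
    ctx = event['getObjectContext']
    # S3 provides a presigned URL to fetch the original object; because CloudTrail
    # objects carry Content-Encoding: gzip metadata, the body arrives decompressed.
    original = requests.get(ctx['inputS3Url']).content
    records = json.loads(original)  # CloudTrail delivers {"Records": [...]}
    for record in records.get('Records', []):
        enrich_record(record)  # placeholder for the IP geolocation/reputation lookups
    # Return the transformed object to the caller via WriteGetObjectResponse.
    s3.write_get_object_response(
        RequestRoute=ctx['outputRoute'],
        RequestToken=ctx['outputToken'],
        Body=json.dumps(records).encode()
    )
    return {'status_code': 200}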
Consider this sample event:
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AHHHHHHHHHHHHH",
        "arn": "arn:aws:iam::123456789123:user/very_evil_user",
        "accountId": "123456789123",
        "accessKeyId": "AHHHHHHHHHHHHH",
        "userName": "very_evil_user"
    },
    "eventTime": "2021-04-01T00:17:04Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "ListBuckets",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "193.169.255.236",
    "userAgent": "[mean-pew-pew-weapon]",
    "requestParameters": {
        "Host": "s3.amazonaws.com"
    },
    "responseElements": null,
    "additionalEventData": {
        "SignatureVersion": "SigV4",
        "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
        "bytesTransferredIn": 0,
        "AuthenticationMethod": "AuthHeader",
        "x-amz-id-2": "ucwSj7oKFXUjN2p9Mxa9P2e+cq7vNg16fYYan1m01XyWGnRow3/lZuVePHj2aVT6YNOeE82NNSvdXjW0CRXCC/TqMcNAGw==",
        "bytesTransferredOut": 79940
    },
    "requestID": "AHHHHHHHHHHHHH",
    "eventID": "a6a81d51-8397-478a-88af-b283a5ed0ef4",
    "readOnly": true,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "123456789123"
}
After passing through our function, the event appears as follows, giving us significantly more context. Note the new ipstack, fraud_score, autofocus and ipqualityscore fields.
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AHHHHHHHHHHHHH",
        "arn": "arn:aws:iam::123456789123:user/very_evil_user",
        "accountId": "123456789123",
        "accessKeyId": "AHHHHHHHHHHHHH",
        "userName": "very_evil_user"
    },
    "eventTime": "2021-04-01T00:17:04Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "ListBuckets",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "193.169.255.236",
    "userAgent": "[mean-pew-pew-weapon]",
    "requestParameters": {
        "Host": "s3.amazonaws.com"
    },
    "responseElements": null,
    "additionalEventData": {
        "SignatureVersion": "SigV4",
        "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
        "bytesTransferredIn": 0,
        "AuthenticationMethod": "AuthHeader",
        "x-amz-id-2": "ucwSj7oKFXUjN2p9Mxa9P2e+cq7vNg16fYYan1m01XyWGnRow3/lZuVePHj2aVT6YNOeE82NNSvdXjW0CRXCC/TqMcNAGw==",
        "bytesTransferredOut": 79940
    },
    "requestID": "AHHHHHHHHHHHHH",
    "eventID": "a6a81d51-8397-478a-88af-b283a5ed0ef4",
    "readOnly": true,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "123456789123",
    "ipstack": {
        "ip": "193.169.255.236",
        "type": "ipv4",
        "continent_code": "EU",
        "continent_name": "Europe",
        "country_code": "PL",
        "country_name": "Poland",
        "region_code": "PM",
        "region_name": "Pomerania",
        "city": "Słupsk",
        "zip": "76-251",
        "latitude": 54.34197998046875,
        "longitude": 17.093599319458008,
        "location": {
            "geoname_id": 3085450,
            "capital": "Warsaw",
            "languages": [
                {
                    "code": "pl",
                    "name": "Polish",
                    "native": "Polski"
                }
            ],
            "country_flag": "http://assets.ipstack.com/flags/pl.svg",
            "country_flag_emoji": "🇵🇱",
            "country_flag_emoji_unicode": "U+1F1F5 U+1F1F1",
            "calling_code": "48",
            "is_eu": true
        }
    },
    "fraud_score": 50,
    "autofocus": {
        "first_seen": null,
        "last_seen": null,
        "seen_by": [],
        "wildfire_verdict": null,
        "pandb_verdict": "malware",
        "whois": "{\"admin_country\": null, \"admin_email\": null, \"admin_name\": null, \"domain_creation_date\": null, \"domain_expiration_date\": null, \"domain_updated_date\": null, \"registrar\": null, \"registrar_url\": null, \"registrant\": null}"
    },
    "ipqualityscore": {
        "success": true,
        "message": "Success",
        "fraud_score": 100,
        "country_code": "PL",
        "region": "Wielkopolskie",
        "city": "Kobylnica",
        "ISP": "GigaHostingServices OU",
        "ASN": 213010,
        "organization": "GigaHostingServices OU",
        "latitude": 54.43,
        "longitude": 17,
        "is_crawler": false,
        "timezone": "Europe/Warsaw",
        "mobile": false,
        "host": "193.169.255.236",
        "proxy": true,
        "vpn": true,
        "tor": false,
        "active_vpn": true,
        "active_tor": false,
        "recent_abuse": true,
        "bot_status": true,
        "connection_type": "Premium required.",
        "abuse_velocity": "Premium required.",
        "request_id": "4DpKSoGlYE4EuFc"
    }
}
This is especially useful in scenarios like DFIR, where the investigation/response team may not have had proper SIEM enrichment and aggregation in place when the logs were originally collected but wants it retroactively. It also has the added bonus of not enriching the data in place, maintaining its forensic integrity.
As is common anytime new features like this are released into the ecosystem, support in higher-level tools (especially third-party ones) that typically interact with buckets is still limited for Object Lambda Access Points. Boto3, the most popular AWS SDK library, supports it, but you'll need a recent version; at the time of writing, boto3 is up to 1.17.46. As described in AWS's original announcement, common S3 commands like "aws s3 cp" don't yet support OLAPs, but "aws s3api get-object," built directly on the S3 JSON API models, does. As is typical for new cloud features, we expect third-party support for Object Lambda Access Points to continue improving as adoption increases.
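For example, with a recent AWS CLI, a direct s3api call against the OLAP ARN (using the same ARN and key as the Python example below) looks like:

aws s3api get-object \
    --bucket arn:aws:s3-object-lambda:us-east-1:123456789123:accesspoint/crypsis-cloudtrail-demo-enrichment-compute \
    --key AWSLogs/o-hmbmrgi7mc/123456789123/CloudTrail/us-east-1/2021/04/01/123456789123_CloudTrail_us-east-1_20210401T001704Z_jjnmaj1A31lajcjr1f.json.gz \
    enriched.json.gz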
For instance, to retrieve this record in Python, we did:
import boto3
import io

s3 = boto3.client('s3')

# Pass the Object Lambda Access Point ARN where a bucket name would normally go.
response = s3.get_object(
    Bucket='arn:aws:s3-object-lambda:us-east-1:123456789123:accesspoint/crypsis-cloudtrail-demo-enrichment-compute',
    Key='AWSLogs/o-hmbmrgi7mc/123456789123/CloudTrail/us-east-1/2021/04/01/123456789123_CloudTrail_us-east-1_20210401T001704Z_jjnmaj1A31lajcjr1f.json.gz'
)

# Stream the response body to disk.
with io.FileIO('data.gz', 'w') as file:
    for chunk in response['Body'].iter_chunks():
        file.write(chunk)
We're excited for the capabilities and other use cases this compute layer provides, as well as how it evolves S3 as a service offering.
Appendix
Recommendations:
- Error recovery and handling are lacking in the snippet provided. Consider adding deliberate error logic and logging.
- Use all of your usual AWS services when developing in Lambda, whether it's CloudWatch Logs to get Lambda logs, X-Ray for tracing, or anything else.
- Modify your cache to your liking: not only can you move beyond an S3 bucket, but if you stay on S3, set lifecycle rules to expire older hits and ensure up-to-date data (see the sketch after this list).
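As one example of the lifecycle approach, a minimal sketch that expires cached lookups after 30 days (the bucket name and retention window are assumptions, not values from our snippet):

import boto3

s3 = boto3.client('s3')

# Expire cached intel objects after 30 days so stale lookups get refreshed.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-intel-cache-bucket',  # hypothetical cache bucket
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-stale-intel',
            'Filter': {'Prefix': ''},  # apply to the whole cache bucket
            'Status': 'Enabled',
            'Expiration': {'Days': 30}
        }]
    }
)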
Please note a few minor considerations that AWS has highlighted or that we ran into in testing:
Cost/performance considerations:
- There are the standard data transfer and (more notably for things like CloudTrail) request costs.
- The Lambda compute (memory and runtime) used to handle these operations, while often cheap, is also factored into cost.
- Lambda will introduce additional overhead latency. Depending on the runtime of the Lambda function, downloading a log could take a second or longer, so it may make sense to be targeted in one's approach. In the case of CloudTrail, this might look like downloading specific days and/or regions rather than pulling widely across a whole tenant. This example makes up to three API calls to external providers per download, which can be intensive. The right comparison, however, is not download speed before versus after, but the cost (and the stability and simplicity, where a serverless platform often wins) of enriching, retaining and processing these logs outside of this scenario.
Development considerations:
- Don't forget the s3-object-lambda:WriteGetObjectResponse IAM permission; it is even included in the IAM Visual Editor. This is called out in AWS's blog.
- It is critical to factor in that S3 Object Lambda relies on the Content-Encoding object metadata to transparently decompress gzipped objects. If you upload a new file without that metadata, the Lambda function will fail to process it, because the function expects a decompressed object.
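For instance, if uploading gzipped objects yourself, set that metadata explicitly so the transparent decompression still works (the bucket and key below are placeholders):

import gzip
import boto3

s3 = boto3.client('s3')

# Compress the log locally, then tell S3 it is gzip-encoded so S3 Object Lambda
# can transparently decompress it when the function fetches the original object.
with open('events.json', 'rb') as f:
    body = gzip.compress(f.read())

s3.put_object(
    Bucket='my-cloudtrail-bucket',        # placeholder bucket
    Key='AWSLogs/manual/events.json.gz',  # placeholder key
    Body=body,
    ContentEncoding='gzip',  # without this, the function receives raw gzip bytes
    ContentType='application/json'
)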