Ep. 134 Bonus Ep. 5 | The 2025 Update: What's New in AWS SAA-C03 and Why It Matters
Kelly 0:00
Welcome to the deep dive. Today, we're doing something a bit special. We're cutting through all the noise around the AWS Certified Solutions Architect Associate exam. You know, the SAA-C03. That's right, especially for you if you're a mid level cloud engineer. Well, staying current isn't just nice to have, it's pretty essential these days, absolutely, things move fast. So our mission for this deep dive: give you that strategic shortcut. We want you to really understand the critical updates, the architectural thinking needed for the 2025 exam, make sure you're not just informed, but actually ready. Exactly.
Chris 0:36
And this isn't just about listing new features. We're going to unpack the implications. You know what these updates mean for for real world building, and maybe most importantly, for many of you listening how they show up in exam questions, right? The goal is you walk away knowing not just what's different, but why it matters for your next design, and yeah, for passing SAA-C03.
Kelly 0:56
Okay, let's, let's unpack that. Then imagine you're starting fresh building some big, important application on AWS. What are those like foundational things you absolutely have to get right first? Yeah, because that's what SAA-C03 tests, right? And it seems like the focus is shifting a bit. So what are those foundational areas that are maybe even more critical now? Yeah, it's good
Chris 1:17
question. The fascinating thing about SAA-C03, I think, is that it demands this really broad architectural view. It's less about knowing, you know, the exact CLI command for something
Kelly 1:31
right, not the nitty gritty implementation detail exactly. It's
Chris 1:34
more about what solution fits the business need you're always designing for those core pillars, security, resilience, performance and cost optimization. That's the constant balancing act, and the exam format reflects that. It does you've got your standard multiple choice, but also those multiple response questions where you have to pick, say two correct answers out of five. That really tests if you grasp the nuances and trade offs not just surface level knowledge.
Kelly 1:57
So just recognising a service name isn't enough. You need to know its role, its place in the bigger picture,
Chris 2:03
precisely. And the Exam Guide lays this out in domains that, well, directly map to modern AWS practices. Take Domain 1, design secure architectures, that's huge. So digging into secure access: IAM, MFA, which is multi factor authentication, obviously crucial. Yeah, definitely need that extra layer. And things like service control policies, SCPs, which act like guardrails across your AWS accounts, plus securing the actual workloads with VPC tools, Cognito for users, Shield for DDoS, WAF, the whole security stack and
Kelly 2:37
resilience must be right up there too, especially for anything critical. Oh, absolutely
Chris 2:40
domain two, design resilient architectures, is often the heaviest part, high availability, fault tolerance, disaster recovery, making sure things don't fall over. Then you have domain three, design high performing architectures, optimising compute, storage, network, data flow, all
Kelly 2:55
for speed and the one everyone feels cost always. Domain four,
Chris 3:00
design cost-optimised architectures, that touches everything. S3 classes, EC2 types, databases, you name it. And the last one, Domain 5, design operationally excellent architectures, is smaller, but still important: monitoring, logging, operations.
Kelly 3:14
Okay, those are the core domains, but you mentioned shifts. What else is getting more focused now for mid level folks, yeah,
Chris 3:20
we're definitely seeing more emphasis on a few key categories. Application integration is a big one. Services for decoupling, things like SQS message queues, API Gateway, serverless patterns. Makes sense with microservices and all that. Exactly. Then analytics and machine learning: data warehousing, data lakes, ML services for pulling insights. Mobile services and platforms too. And, of course, migration and transfer tools, because getting stuff into AWS is often the first step
Kelly 3:46
that really paints the picture. You mentioned application integration and ml, what's like the biggest mistake or gotcha engineers run into when they start using those newer, maybe serverless, heavy services?
Chris 3:58
You know, it often boils down to really understanding the trade-offs, not just the shiny benefits. With serverless, like Lambda, that cold start thing. Ah, yeah. The delay can be a killer for anything needing super low latency, yeah. Or with integration, people might say they're decoupling, but still build things tightly coupled, not really using SQS queues properly for true asynchronous resilience. The exam loves to probe those kinds of architectural choices.
Kelly 4:25
Got it? Okay? So we've got the blueprint. Now. Let's zoom in on some of the building blocks, the specific services. Like you said, it's knowing which brick to use where. Let's start with maybe the most fundamental storage service, Amazon,
Chris 4:39
S3, right? S3, its core strength is just how versatile it is. Object based, serverless, scales massively. Buckets have those globally unique names, sure, but the real architectural power comes from the storage classes. They're cost saving? Exactly. It's a whole spectrum. You've got Standard for frequent access down to Glacier Deep Archive for, well, super cheap long term storage, where waiting 12 hours for retrieval is okay. And in between, you have things like Intelligent-Tiering, which is pretty smart, monitors access patterns and moves your data automatically between tiers based on how you access it.
Kelly 5:10
So the architect's job is picking the right balance of cost, access speed, durability,
Chris 5:15
precisely, and access control. Bucket policies are generally the way to go over ACLs now, and remember, buckets are private by default, yeah. Then you have key features like versioning, a lifesaver for accidental deletes. Oh, yeah. Lifecycle management for automating transitions or deletions. S3 Transfer Acceleration, using the edge network for faster uploads, and even pre-signed URLs for giving temporary access to private stuff.
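Those lifecycle rules are just a small configuration document under the hood. Here's a minimal sketch in the shape boto3's `put_bucket_lifecycle_configuration` expects; the rule ID, prefix, and day thresholds are hypothetical, and no AWS call is actually made:

```python
# A minimal S3 lifecycle configuration sketch. The rule name, prefix,
# and thresholds are made up for illustration; this builds the document
# locally without touching AWS.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-logs",       # hypothetical rule name
            "Filter": {"Prefix": "logs/"},  # apply only to this key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},    # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},        # archive tier
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # cheapest tier
            ],
            "Expiration": {"Days": 2555},   # delete after roughly 7 years
        }
    ]
}

# With real credentials you would apply it via:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-log-bucket", LifecycleConfiguration=lifecycle_config)
print(len(lifecycle_config["Rules"]))  # 1
```

The transitions march the data down the cost spectrum just discussed: Standard, then Standard-IA, then Glacier, then Deep Archive.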
Kelly 5:38
Wow, okay, S3 is way more than just dumping files, all right, moving from storing data to like the actual network Foundation, Amazon, VPC, virtual private cloud.
Chris 5:49
Yeah, VPC is basically your own private slice of the AWS network, total control. Key bits are the Internet Gateway to get out to the public internet, the Virtual Private Gateway for VPNs back to your own data centre, route tables to direct traffic, and then firewalls. You've got network ACLs at the subnet level, which can have deny rules, and security groups at the instance level, which are allow-only. Two layers of defence.
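That evaluation difference between the two firewall layers can be sketched in a few lines. This is an illustration of the semantics, not AWS's implementation: network ACL rules are checked in rule-number order with explicit ALLOW/DENY and an implicit deny at the end, while security groups only ever have allow rules.

```python
def nacl_allows(rules, port):
    """NACL semantics: rules evaluated in number order, first match wins;
    an explicit DENY blocks traffic, and unmatched traffic is denied."""
    for _num, action, (lo, hi) in sorted(rules):
        if lo <= port <= hi:
            return action == "ALLOW"
    return False  # implicit deny

def sg_allows(allowed_ranges, port):
    """Security group semantics: allow-only; anything not listed is denied."""
    return any(lo <= port <= hi for lo, hi in allowed_ranges)

# Hypothetical inbound NACL: rule 100 allows HTTP, rule 200 denies everything.
rules = [(100, "ALLOW", (80, 80)), (200, "DENY", (0, 65535))]
print(nacl_allows(rules, 80))   # True - rule 100 matches first
print(nacl_allows(rules, 22))   # False - falls through to the deny
print(sg_allows([(443, 443)], 443))  # True
```

Note the rule numbers matter for NACLs; swap 100 and 200 and port 80 gets denied too.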
Kelly 6:13
Okay. Question. Then my EC2 instance is in a private subnet, no direct Internet access, but it needs to say, download software updates. How does it do that securely?
Chris 6:24
Ah, perfect scenario for a NAT gateway. It lets instances in private subnets talk out to the internet, but nothing from the internet can initiate a connection in. And definitely use NAT gateways now, not the old NAT instances. Gateways are managed, scale better, more reliable.
Kelly 6:39
Managed is always nice, okay, but what if my private instance needs to talk to, say, S3 or DynamoDB without going out to the public internet at all? Keep it all inside AWS. That's
Chris 6:49
exactly what VPC endpoints are for. Keeps traffic on the AWS backbone. Two main types. Gateway endpoints: these are free, work for S3 and DynamoDB, and you just add a target in your route table. Simple. Then there are interface endpoints. These use AWS PrivateLink. They cost money, but support way more services. They actually put a network interface, an ENI with a private IP, right into your VPC.
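A quick way to remember that split, as a toy helper. Purely illustrative: the free gateway option exists only for S3 and DynamoDB, and everything else supported by PrivateLink gets an interface endpoint (S3 does also offer interface endpoints these days, but the gateway endpoint is the classic free answer for it).

```python
def endpoint_type(service: str) -> str:
    """Rule-of-thumb sketch: gateway endpoints (free, route-table based)
    exist only for S3 and DynamoDB; other PrivateLink-supported services
    use interface endpoints (a billed ENI inside your VPC)."""
    return "gateway" if service.lower() in {"s3", "dynamodb"} else "interface"

print(endpoint_type("s3"))        # gateway
print(endpoint_type("dynamodb"))  # gateway
print(endpoint_type("sqs"))       # interface
```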
Kelly 7:10
Got it so more flexible, but cost something. And for connecting back to on prem, you've
Chris 7:15
got Direct Connect for that dedicated, private, super fast link if you need serious bandwidth, or standard VPN connections over the internet, still secure. And don't forget VPC flow logs for watching the traffic. Troubleshooting, invaluable. You can send those logs to CloudWatch or S3
Kelly 7:29
okay, from that solid network base with VPC, let's talk compute, the engines running inside those networks. EC2, Elastic Compute Cloud, this is where the apps run. What's key for architects here, especially instance types and
Chris 7:42
pricing, right. EC2, choosing the right instance type is job one for performance. You've got general purpose, compute optimised, memory optimised, accelerated for GPUs and FPGAs, storage optimised. Pick the tool for the job. Then pricing, huge for cost optimization. On-Demand is flexible, pay as you go. Reserved Instances, RIs, give big discounts if you commit long term. Spot Instances are amazing, use AWS spare capacity for up to 90% off. But spot can be interrupted, right? That's the catch. So great for workloads that can handle interruptions, like batch processing or some web fleets, not for your critical database. And then dedicated hosts, most expensive, single tenant hardware, usually for specific licencing or compliance reasons. And placement, I've heard of placement groups. Yeah. Placement groups let you influence where your instances physically run. A cluster group packs them close together in one availability zone for super low latency. A spread group does the opposite, separates critical instances across different hardware racks, even different AZs, for maximum
Kelly 8:44
fault tolerance. Okay, and quickly, user data and metadata. User Data
Chris 8:48
is a script you can pass in at launch time, great for initial setup, installing software. Metadata is info about the instance itself that the instance can query at runtime, like its own IP address or instance ID. Gotcha
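That on-demand versus spot pricing gap is easy to put rough numbers on. A back-of-the-envelope sketch with a hypothetical $0.10/hour rate and the best-case 90% spot discount mentioned above:

```python
def monthly_cost(hourly_rate: float, hours: int = 730) -> float:
    """Approximate monthly cost at roughly 730 hours per month."""
    return round(hourly_rate * hours, 2)

on_demand_rate = 0.10                    # hypothetical $/hour for some instance type
spot_rate = on_demand_rate * (1 - 0.90)  # best-case 90% spot discount

print(monthly_cost(on_demand_rate))  # 73.0
print(monthly_cost(spot_rate))       # 7.3
```

That order-of-magnitude difference is why interruption-tolerant batch fleets run on spot; real spot prices fluctuate, so the 90% figure is a ceiling, not a guarantee.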
Kelly 8:59
now flipping the compute coin completely AWS, Lambda, serverless. What's the architect's view here? Well, the beauty
Chris 9:08
of Lambda is obviously running code without touching servers. Pay per invocation, scales automatically. It's
Kelly 9:14
powerful, but you mentioned cold starts earlier. Ah, yes, the cold
Chris 9:17
start. If a function hasn't run recently, there can be a small delay while AWS spins up the environment. Usually it's tiny, milliseconds, but for super latency-sensitive stuff, you have to architect around it. Maybe use provisioned concurrency. Okay? And VPC access: by default, Lambda runs outside any VPC, but you can configure it to run inside your VPC if it needs to access private resources, like an RDS database that isn't exposed publicly.
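The cold-start effect is easy to demonstrate with a toy handler. This is a simulation, not Lambda itself: the first call in a "fresh environment" pays a one-time init cost (a 0.2 second sleep standing in for runtime and dependency startup), and later warm calls skip it because module state survives between invocations.

```python
import time

_initialized = False  # module-level state persists across warm invocations

def handler(event):
    """Toy handler simulating a Lambda cold start."""
    global _initialized
    if not _initialized:
        time.sleep(0.2)  # stand-in for runtime + dependency initialization
        _initialized = True
    return {"ok": True, "event": event}

t0 = time.perf_counter(); handler("first"); cold = time.perf_counter() - t0
t0 = time.perf_counter(); handler("second"); warm = time.perf_counter() - t0
print(cold > warm)  # the cold invocation pays the init cost
```

Provisioned concurrency effectively pre-runs that init step so requests only ever see the warm path.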
Kelly 9:43
Good point speaking of databases, let's talk RDS and Aurora manage relational databases. What are the must
Chris 9:49
knows? RDS takes away so much operational pain: automated backups, patching, scaling, encryption using KMS. Architecturally, the big feature is Multi-AZ deployments for high availability. Yeah, exactly. It keeps a synchronous standby replica in another AZ. If the primary fails, RDS automatically fails over. Seamless HA. Then for scaling reads, you use read replicas. Asynchronous replication, right? Async. You can have up to five, offload read traffic, even put them in different regions. Now, Amazon Aurora is RDS but sort of turbocharged. How so? It's MySQL and PostgreSQL compatible, but built for the cloud, way faster, like 5x MySQL, 3x Postgres performance. The real magic is its storage. It's distributed, self healing, makes six copies across three AZs, super durable. Wow. You can have up to 15 Aurora replicas for massive read scaling, and Aurora Serverless is brilliant for unpredictable workloads. Scales compute up and down automatically. Very cost effective if usage is
Kelly 10:45
spiky, that sounds incredibly powerful, okay, shifting gears to connect all these pieces, application integration, decoupling, messaging, what services are key?
Chris 10:53
Application integration is all about breaking monoliths, building resilient systems. SQS, Simple Queue Service, is fundamental here. It's a message queue. Standard versus FIFO? Yeah, good distinction. Standard queues give you massive throughput, at-least-once delivery. FIFO queues guarantee order and exactly-once processing, but with lower throughput limits. Choosing depends on the use case. The core idea, though, is decoupling. One service puts a message on the queue, another picks it up later. They don't need to know about each
Kelly 11:21
other directly, right? Breaks dependencies, allows independent scaling and failure Exactly.
Chris 11:25
That's the architectural win. It transforms how you build resilient systems.
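That decoupling pattern can be sketched with Python's in-memory `queue.Queue` standing in for SQS (real code would use boto3's `send_message`/`receive_message`). The producer only knows about the queue, the consumer drains it at its own pace, and neither references the other directly.

```python
import queue
import threading

jobs = queue.Queue()  # stand-in for an SQS queue
results = []

def consumer():
    """Consumer drains messages at its own pace; stops on a sentinel."""
    while True:
        msg = jobs.get()
        if msg is None:
            break
        results.append(f"processed:{msg}")

t = threading.Thread(target=consumer)
t.start()

# Producer side: just enqueue, with no knowledge of who consumes.
for i in range(3):
    jobs.put(f"order-{i}")
jobs.put(None)  # sentinel so the example terminates

t.join()
print(results)
```

If the consumer is slow or down, messages simply wait in the queue, which is exactly the resilience win being described.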
Kelly 11:30
So SQS for one to one, async, what if I need to broadcast a message to many different places? That's
Chris 11:35
SNS, Simple Notification Service, pub/sub. You publish a message to an SNS topic and it pushes it out to all subscribers. Could be email, SMS, Lambda functions, SQS queues, HTTP endpoints. Very flexible. Topics can be encrypted with KMS too.
Kelly 11:50
Okay. And for creating APIs, the front door for applications,
Chris 11:54
that's API Gateway, fully managed service. You define your API structure, the resources, like URLs, the methods, GET, POST, et cetera. You can create different versions using stages, use custom domains, enable caching for better performance. Handles CORS, too. That's always tricky. Yep, handles CORS, cross-origin resource sharing, for you. And a key thing to remember: you have to deploy your API after making changes for them to take effect. Catches people out sometimes. Good
Kelly 12:20
tip. All right, one more core service, CloudFront, the CDN. How does this fit into the architecture?
Chris 12:26
CloudFront is all about speed and reducing load on your origin servers. It caches your content, images, videos, API responses, at edge locations all around the world, closer to your users. So users get faster responses? Exactly. The main parts are your origins, where the original files live, S3, EC2, load balancers, and the distribution, the settings for how CloudFront behaves, which edge locations to use, cache duration or TTL. You can manually clear the cache using invalidations
Kelly 12:53
and security. Can you restrict who accesses the cached content? Yes,
Chris 12:58
using signed URLs or signed cookies. And typically, you'll use an origin access identity, OAI, to allow only CloudFront to access your S3 bucket, stopping users from going around CloudFront directly to S3
Kelly 13:09
locks it down nicely. Okay. One last area. What about moving massive amounts of data into AWS from, say, an on premises data centre? That sounds like a huge challenge. It
Chris 13:21
can be, and that's where the AWS Snow family comes in. Think Snowball. It's basically a rugged, shippable storage device. You physically ship data? You do. If you have, say, 100 terabytes, transferring that over the internet could take ages, maybe months, and cost a lot in bandwidth. With Snowball, AWS ships you the device, you load your data onto it locally, ship it back, and they load it into S3. Much faster, often cheaper, for large volumes. Oh, wow. There's Snowball Edge, which has more capacity and even some compute power for pre processing data locally. And for absolutely enormous data sets, there's Snowmobile, literally a shipping container on a truck, holding up to 100 petabytes. A truck full of data. Okay. Both Snowball and Snowmobile work for import and export, by the way, getting data out of S3 or Glacier too.
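That "could take months" claim is simple arithmetic. A sketch assuming a fully saturated link, which is optimistic since real transfers do worse:

```python
def transfer_days(terabytes: float, mbps: float) -> float:
    """Days to move `terabytes` (decimal TB) over a `mbps` megabit/second
    link, assuming full sustained utilisation."""
    bits = terabytes * 1e12 * 8           # TB -> bytes -> bits
    seconds = bits / (mbps * 1e6)         # link speed in bits/second
    return seconds / 86400                # seconds -> days

print(round(transfer_days(100, 100), 1))   # 92.6 days on a 100 Mbps line
print(round(transfer_days(100, 1000), 1))  # 9.3 days even at 1 Gbps
```

Three months on a typical office line versus roughly a week of shipping and loading is why the Snow family exists.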
Kelly 14:06
Okay, that's a fantastic rundown of the key services. So bringing it all together, what does this actually mean for you, the listener, prepping for the SAA-C03 exam? How do you turn all this knowledge into a winning approach?
Chris 14:18
Yeah, this is crucial. How do you shift into that exam mindset? Yeah, the big thing to remember is it's less about recalling exact commands, more about picking the right tool or pattern for the job described in the scenario. And always, always consider the
Kelly 14:33
trade off, trade offs right between cost, performance, security, exactly the
Chris 14:37
questions constantly test your ability to design for those key themes. Security: access control, encryption, network isolation. Resilience: HA, multi-AZ, read replicas, Auto Scaling, load balancers. Performance: CloudFront, instance types, caching like ElastiCache. And cost optimization: S3 tiers, RIs, spot, Aurora Serverless, lifecycle rules. You need to juggle all of
Kelly 14:59
these. Can you give us a couple of quick examples? Like, how might these concepts show up in a question?
Chris 15:03
Sure, let's try a few quick ones. Imagine a scenario: company needs to archive logs for compliance, rarely accessed, needs lowest possible cost, retrieval time up to 12 hours is acceptable. What S3 class?
Kelly 15:16
uh, lowest cost, long retrieval sounds like Glacier deep archive spot
Chris 15:20
on. Or security: app on EC2 in a private subnet needs to talk to S3 and DynamoDB, no public internet traffic allowed. How do you connect them securely? That sounds like the
Kelly 15:30
VPC endpoints we talked about, the gateway endpoints specifically for S3 and DynamoDB perfect.
Chris 15:35
See, it's about picking the right service for the constraints. How about HA and read scaling for a database? Need a PostgreSQL database to survive an AZ failure and improve read performance. What two RDS features?
Kelly 15:48
Okay, AZ failure means multi-AZ deployment improving read performance means read replicas. You
Chris 15:53
got it. One more, maybe integration. Web app needs a scalable back end to process uploads, trigger a Lambda function, no servers to manage. What combination?
Kelly 16:00
scalable front door, API Gateway, triggering serverless Lambda, nice
Chris 16:05
and finally, migration, 100 terabytes on prem needs to be in AWS within a week, cost effectively avoid internet transfer.
Kelly 16:14
That's Snowball, right? The physical transfer. Exactly. See how the questions
Chris 16:18
weave together the requirements and force you to make an architectural choice based on those trade offs.
Kelly 16:22
Yeah, that really clarifies it. It's decision making, not just recall any final tips for exam day.
Chris 16:28
Definitely read the AWS white papers, especially Architecting for the Cloud: AWS Best Practices. It really instils that mindset. And honestly, if you're newer to AWS, don't hesitate to do the Cloud Practitioner, CCP, first. It builds that solid foundation. The SAA-C03 really tests if you can think like an architect, balancing all those pillars we discussed.
Kelly 16:48
Well, that brings us deep dive to a close. We've covered a tonne of ground, from the big picture shifts in AWS to the nitty gritty of key services, all aimed at helping you nail the SAA-C03 and just become a better Cloud Architect. Hopefully this gave you some of those aha moments,
Chris 17:04
and remember, in this fast moving cloud world, just knowing the services isn't enough, it's understanding how they connect, how they fit together to build something truly scalable, secure, cost effective. So maybe think about this. How might one of these services, maybe one you didn't focus on before, completely change how you approach your next design.
Kelly 17:24
Great final thought. Thank you for joining us on The Deep Dive. Until next time, keep diving deep.
