Ten best practices for microservices deployments

What best practices do you follow when deploying microservices in AWS? Assume an AI/ML model needs to be deployed in AWS & exposed as an API endpoint. What are the best practices? Are any guardrails available?

This is not a standard procedure, & the list is not exhaustive, but here is what I follow at minimum when I deploy an app service, for example a Python ML model, in AWS.

Please share your thoughts.

1. Provisioning a dedicated VPC for network isolation

2. Containerized hosting of the app service in an ECS cluster running in private subnets for access control

3. ECS cluster spanning multiple AZs for high availability

4. Public subnets in those AZs for hosting i) Bastion hosts, for administering the ECS cluster in the private subnets, & ii) a NAT Gateway, for the ECS service to access the container registry via the internet

5. ECS service with auto-scaling & application load balancing for high performance & automatic failover

6. Fronting the application load balancer with API Gateway via a VPC Link, so external client apps can access the app service

7. Use of CloudFormation/Terraform to manage the infrastructure as code

8. Use of separate CI/CD pipelines to deploy the app service & the infrastructure

9. Use of VPC endpoints to access AWS services like S3

10. Centralized logging & monitoring via CloudWatch
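
To make items 1-4 concrete, here is a minimal Terraform sketch of the network layout. All names, CIDRs & AZs are hypothetical placeholders, not a definitive implementation; adjust them to your environment.

```hcl
# Hypothetical sketch; CIDRs, AZs & names are placeholders.
resource "aws_vpc" "ml_vpc" {
  cidr_block = "10.0.0.0/16"
}

# Private subnets in two AZs host the ECS tasks (items 2 & 3).
resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.ml_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_b" {
  vpc_id            = aws_vpc.ml_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

# Public subnet for the NAT Gateway & bastion hosts (item 4).
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.ml_vpc.id
  cidr_block              = "10.0.101.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

# NAT Gateway lets tasks in private subnets pull container images.
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}

resource "aws_ecs_cluster" "ml_cluster" {
  name = "ml-model-cluster"
}
```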
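
Item 5 can be sketched as an ECS service with target-tracking auto-scaling. Again a hedged sketch with hypothetical names: the task definition, security group & ALB target group referenced here are assumed to be defined elsewhere in the configuration.

```hcl
# Hypothetical sketch; assumes aws_ecs_cluster.ml_cluster, the private
# subnets, aws_ecs_task_definition.ml_api, aws_security_group.ml_api &
# aws_lb_target_group.ml_api (attached to an internal ALB) exist.
resource "aws_ecs_service" "ml_api" {
  name            = "ml-api"
  cluster         = aws_ecs_cluster.ml_cluster.id
  task_definition = aws_ecs_task_definition.ml_api.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = [aws_subnet.private_a.id, aws_subnet.private_b.id]
    security_groups = [aws_security_group.ml_api.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.ml_api.arn
    container_name   = "ml-api"
    container_port   = 8080
  }
}

# Target-tracking auto-scaling on average CPU utilization.
resource "aws_appautoscaling_target" "ml_api" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.ml_cluster.name}/${aws_ecs_service.ml_api.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}

resource "aws_appautoscaling_policy" "ml_api_cpu" {
  name               = "ml-api-cpu-scaling"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ml_api.service_namespace
  resource_id        = aws_appautoscaling_target.ml_api.resource_id
  scalable_dimension = aws_appautoscaling_target.ml_api.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 60
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```

Running at least two tasks across the two private subnets is what gives the automatic failover mentioned in item 5: if one AZ degrades, the load balancer routes to the healthy tasks in the other.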
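
For item 6, a hedged sketch of the API Gateway VPC Link, again with hypothetical names; the ALB listener & security group are assumed to exist. Note that an HTTP API (API Gateway v2) VPC Link can target an ALB listener directly, whereas a REST API VPC Link requires a Network Load Balancer instead.

```hcl
# Hypothetical sketch; assumes the private subnets,
# aws_security_group.ml_api & aws_lb_listener.ml_api
# (on an internal ALB) are defined elsewhere.
resource "aws_apigatewayv2_vpc_link" "ml_api" {
  name               = "ml-api-vpc-link"
  subnet_ids         = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  security_group_ids = [aws_security_group.ml_api.id]
}

resource "aws_apigatewayv2_api" "ml_api" {
  name          = "ml-api"
  protocol_type = "HTTP"
}

# Proxy all routes through the VPC Link to the internal ALB listener.
resource "aws_apigatewayv2_integration" "ml_api" {
  api_id             = aws_apigatewayv2_api.ml_api.id
  integration_type   = "HTTP_PROXY"
  integration_method = "ANY"
  integration_uri    = aws_lb_listener.ml_api.arn
  connection_type    = "VPC_LINK"
  connection_id      = aws_apigatewayv2_vpc_link.ml_api.id
}
```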
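
Items 9 & 10 can be sketched the same way. A Gateway-type VPC endpoint keeps S3 traffic inside the AWS network rather than going out through the NAT Gateway. Names, region & retention period are hypothetical; the private route table is assumed to be defined elsewhere.

```hcl
# Hypothetical sketch; assumes aws_vpc.ml_vpc & a private route
# table exist. Gateway endpoints for S3 incur no hourly charge.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.ml_vpc.id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}

# Centralized log group the containers write to via the awslogs driver.
resource "aws_cloudwatch_log_group" "ml_api" {
  name              = "/ecs/ml-api"
  retention_in_days = 30
}
```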

Suvo Dutta

I have over 22 years of IT experience in strategy, advisory, innovations, and cloud-based solutions in the Insurance domain. I advise clients in transforming their IT ecosystems to future-ready architectures that can provide exemplary customer experience, improve operating efficiency, enable faster product development and unlock the power of data.
