A scalable architecture is often implemented using containerization (e.g., Docker) and an orchestrator (such as Kubernetes). This approach isolates the application's components from one another, which simplifies their independent deployment and scaling.
Details:
Code example (YAML manifest for a Kubernetes Deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service-container
          image: my-service:latest
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "1"
              memory: "1Gi"
Follow-up questions:
Can a container have shared access to the filesystem with another container?
Yes, containers can share volumes. In Kubernetes, containers in the same pod can mount a shared emptyDir volume, and containers in different pods can share data through a PersistentVolume (via a PersistentVolumeClaim, provided the storage supports the required access mode).
Code example:
volumes:
  - name: shared-data
    emptyDir: {}
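As a minimal sketch of how that volume is actually shared, here is a pod spec where two containers mount the same emptyDir (the pod, container, and image names are illustrative, not from the original):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod        # illustrative name
spec:
  containers:
    - name: writer             # writes into the shared volume
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader             # sees the same files at the same path
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}             # lives as long as the pod, shared by both containers
```

Note that an emptyDir is scoped to the pod: it is created when the pod is scheduled and deleted when the pod is removed, so it is suited to scratch data exchanged between sidecar containers, not to durable storage.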
What happens if in Kubernetes only pods are scaled without scaling the database?
Latency will grow and the database will become the bottleneck: each additional pod adds connections and query load against the same database instance. It is important to scale every bottleneck, horizontally or vertically, not just the stateless tier (e.g., read replicas, sharding, connection pooling, or caching in front of the database).
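For the stateless tier itself, scaling is often automated with a HorizontalPodAutoscaler. A minimal sketch targeting the Deployment above (the HPA name and thresholds are assumptions for illustration); note that this scales only the pods and does nothing for the database behind them:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa          # illustrative name
spec:
  scaleTargetRef:               # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```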
Can a container remain running when the orchestration cluster fails?
Yes. Running containers are managed by the kubelet and the container runtime on each node, which do not depend on the control plane moment to moment, so existing workloads keep serving traffic. However, without the control plane (API server, scheduler, controller manager) there is no rescheduling of failed pods to other nodes, no rolling updates, and no autoscaling until it recovers.