Pillow, numpy, imagehash from a MacBook for AWS Lambda

requirements.txt:

requests==2.32.0
imagehash==4.3.2
  1. Download the numpy wheel from https://pypi.org/project/numpy/#files (pick the manylinux x86_64 build matching the Lambda runtime, e.g. numpy-2.2.6-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl)
  2. Download the scipy wheel the same way, e.g. scipy-1.15.3-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  3. Create build.sh:
#!/bin/zsh
set -e
ZIP_NAME="lambda_package.zip"

LAMBDA_DIR="lambda_build"

rm -rf "$LAMBDA_DIR" "$ZIP_NAME"
mkdir "$LAMBDA_DIR"

cp lambda_function.py requirements.txt "$LAMBDA_DIR"

# Install the Python dependencies into the package directory.
# pip resolves binary wheels for the container's own platform here;
# numpy is swapped out for the manylinux x86_64 wheel further below.
docker run --rm -v "$PWD/$LAMBDA_DIR":/var/task python:3.13 \
/bin/bash -c "
pip install -r /var/task/requirements.txt -t /var/task &&
rm /var/task/requirements.txt
"

cd "$LAMBDA_DIR"
# Pillow comes from a Lambda layer, and the numpy pulled in by pip may be
# built for the wrong platform, so drop both from the package.
rm -rf PIL pillow.libs pillow-*.dist-info
rm -rf numpy numpy.libs numpy-*.dist-info

# Replace them with the manylinux x86_64 wheels downloaded earlier.
unzip -o ../numpy-2.2.6-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl -d .
unzip -o ../scipy-1.15.3-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl -d .
zip -r9 "../$ZIP_NAME" .
cd ..

rm -rf "$LAMBDA_DIR"

2. Find a Pillow layer on AWS; it has the name "Lambda Layer with Pillow image processing library"

3. Link your function to the layer (see the commands below)
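
With the zip built and the layer found, the function can also be wired up from the CLI. A minimal sketch: the function name my-image-func is a placeholder, and the layer ARN must be taken from the layer's page:

# upload the freshly built package (placeholder function name)
aws lambda update-function-code \
  --function-name my-image-func \
  --zip-file fileb://lambda_package.zip

# attach the Pillow layer (placeholder layer ARN)
aws lambda update-function-configuration \
  --function-name my-image-func \
  --layers arn:aws:lambda:us-east-1:111111111111:layer:Pillow:1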

Repack a GitLab Docker image to AWS ECR

export AWS_ACCESS_KEY_ID=KEY
export AWS_SECRET_ACCESS_KEY=SECRET

AWS_REGION=us-east-1
AWS_ACCOUNT_ID=myaccountid

DOCKER_USERNAME=gitlab+deploy-token-1011552
DOCKER_PASSWORD=SECRETPASSWORD


GITLAB_TAG=registry.gitlab.com/username/project/containername:latest
AWS_TAG=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/containername

echo "$DOCKER_PASSWORD" | docker login registry.gitlab.com -u "$DOCKER_USERNAME" --password-stdin
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

docker pull $GITLAB_TAG
docker tag $GITLAB_TAG $AWS_TAG
docker push $AWS_TAG
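
If the target repository does not exist in ECR yet, the push fails; it can be created once up front. A quick sketch using the repository name from above:

aws ecr create-repository --repository-name containername --region $AWS_REGION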

Repack an x86 Java app to arm64

DOCKER_USERNAME=USERNAME
DOCKER_PASSWORD=PASSWORD

docker login registry.gitlab.com -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
docker pull registry.gitlab.com/username/project/fileservice:latest

container_id=$(docker create "registry.gitlab.com/username/project/fileservice:latest")
docker cp "$container_id:/app.jar" "/opt/openp-fs/app.jar"
docker rm "$container_id"

docker build -t registry.gitlab.com/username/project/fileservice:latest .

Dockerfile:

FROM arm64v8/adoptopenjdk:16-jre
COPY app.jar /app.jar
ENTRYPOINT java $JAVA_OPTS -Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom -jar /app.jar
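
If the build host is not arm64 itself, Docker may need to be told the target platform explicitly. A sketch using buildx (assumes QEMU emulation is available):

# cross-build the image for arm64 on an x86 host
docker buildx build --platform linux/arm64 \
  -t registry.gitlab.com/username/project/fileservice:latest .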

AWS Terraform user for a pipeline

  1. Create a user with minimal permissions
  2. Create a role with Administrator access
  3. Add a trust relationship for that user to the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::11111111:user/terraform_user"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

Keep in mind that root in the principal (arn:aws:iam::ACCOUNT_ID:root) means all users and roles in that AWS account.

4. Attach the following policy to the user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::11111111:role/terraform_role"
        }
    ]
}

Now the pipeline can assume that role and deploy all the required resources in the entire account.
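
The setup can be sanity-checked from the user's credentials before wiring it into the pipeline. If the trust relationship is correct, this returns temporary credentials:

aws sts assume-role \
  --role-arn arn:aws:iam::11111111:role/terraform_role \
  --role-session-name terraform-pipeline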

Terraform: dial tcp [::1]:80: connect: connection refused

╷
│ Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp [::1]:80: connect: connection refused
│ 
│   with module.eks.kubernetes_config_map.aws_auth[0],
│   on .terraform/modules/eks/aws_auth.tf line 63, in resource "kubernetes_config_map" "aws_auth":
│   63: resource "kubernetes_config_map" "aws_auth" {

It means the kubeconfig was not found at the default path.

It can be set with the following command:

export KUBECONFIG=~/.kube/custom-config.conf
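
If the kubeconfig does not exist at all yet, it can be generated for an EKS cluster with the AWS CLI (cluster name is a placeholder):

aws eks update-kubeconfig --region us-east-1 --name my-cluster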

Another way is to check the provider section, specifically the config_path parameter:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  config_context         = data.aws_eks_cluster.cluster.arn
  config_path            = var.eks_kube_config_path
}

In the end, I would remove aws_auth from the state and stop Terraform from managing that ConfigMap:

terraform state rm 'module.eks.kubernetes_config_map.aws_auth'

module "eks" {
  # ...
  manage_aws_auth = false
}

Stick a pod to an AWS availability zone

If we need to run one pod in a specific availability zone (because of an EBS volume, for example), we need to mark the appropriate nodes with a label.


kubectl label node ip-10-0-1-249.ec2.internal region=us-east-1a
kubectl label node ip-10-0-2-27.ec2.internal region=us-east-1b
kubectl label node ip-10-0-3-244.ec2.internal region=us-east-1c

Let's check what labels the nodes have:

kubectl get nodes --show-labels

We should see something like this:

ip-10-0-3-244.ec2.internal   Ready    <none>   30d   v1.21.2-eks-55daa9d   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1c,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-3-244.ec2.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=t3.medium,region=us-east-1c,topology.kubernetes.io/region=us-east-1,topology.kubernetes.io/zone=us-east-1c

Now we need to add a nodeSelector to the Deployment or Pod definition:

spec:
  replicas: 1
  selector:
    matchLabels:
      app: specificapp
  template:
    metadata:
      labels:
        app: specificapp
    spec:
      containers:
        - image: "specificapp:1.0"
          name: specificapp
          ports:
            - containerPort: 80
      nodeSelector:
        region: us-east-1a
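
As the --show-labels output above shows, EKS nodes already carry the built-in topology.kubernetes.io/zone label, so that could be used in the nodeSelector instead of the custom region label. A quick check of which nodes would match:

kubectl get nodes -l topology.kubernetes.io/zone=us-east-1a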

That’s all

Kong with a DB and admin UIs

version: '3.9'

x-kong-config: &kong-env
  KONG_DATABASE: postgres
  KONG_PG_HOST: 192.168.2.48
  KONG_PG_DATABASE: kong
  KONG_PG_USER: postgres
  KONG_PG_PASSWORD: system
  KONG_PG_PORT: 5432
  KONG_PG_SCHEMA: public

services:
  migration:
    image: kong:latest
    environment:
      <<: *kong-env
    command: kong migrations bootstrap
  konga:
    image: pantsel/konga
    environment:
      NODE_ENV: production
    ports:
      - "1337:1337"

  ui:
    image: pocketdigi/kong-admin-ui:0.5.3
    ports:
      - "8899:80"
  kong:
    image: kong:latest
    environment:
      <<: *kong-env
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      #KONG_DECLARATIVE_CONFIG: "/opt/kong/kong.yaml"
    ports:
      - "8000:8000"
      - "8444:8444"
      - "8001:8001"
    volumes:
      - ./kong:/opt/kong
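
A minimal smoke test, assuming the compose file above: run the migrations first, then start the rest and hit the Admin API. Konga then answers on port 1337 and kong-admin-ui on port 8899.

docker compose up migration          # bootstrap the Kong database schema
docker compose up -d kong konga ui   # start the gateway and both UIs
curl -i http://localhost:8001/status # Kong Admin API status endpoint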
