
How Google Cloud Run Deploys Code Without a Dockerfile

Ever wondered what happens under the hood when you deploy to Google Cloud Run without a Dockerfile? I dive deep into the entire process from source code to running service.

Ahmed Jama

5-Minute Read


Introduction

I was playing around with Google Cloud Run the other day and stumbled upon something rather fascinating. I’d created a simple Python application with just two files - main.py and requirements.txt - and decided to deploy it using the --source flag. What happened next absolutely blew my mind.
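
For context, the sort of app I'm talking about looks something like this - a hypothetical minimal version rather than my exact 312 bytes of code:

# main.py - hypothetical minimal app (not my exact code, but the same shape)
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Cloud Run!"

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via the PORT env var
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

# requirements.txt - the only other file in the directory:
# flask
# gunicorn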

gcloud run deploy --source .

That single command took my local Python files and magically turned them into a fully containerised, globally accessible web service. No Dockerfile in sight. But what’s actually happening behind the scenes?

The Initial Mystery

Here’s what I saw in my terminal:

(guvenv) ubuntu@ubuntu:~/cloudrun-example$ gcloud run deploy --source .
Service name (cloudrun-example):
The following APIs are not enabled on project [hoffmanbeeds]:
        artifactregistry.googleapis.com
        cloudbuild.googleapis.com
        run.googleapis.com
Do you want enable these APIs to continue (this will take a few minutes)? (Y/n)?  y
Enabling APIs on project [hoffmanbeeds]...

Right off the bat, I noticed it was enabling multiple APIs I hadn’t explicitly requested. This was my first clue that there was a lot more happening than meets the eye.

After selecting a region and agreeing to create an Artifact Registry repository, I watched as this happened:

Building using Buildpacks and deploying container to Cloud Run service [cloudrun-example]
✓ Building and deploying new service... Done.
  ✓ Uploading sources...
  ✓ Building Container...
  ✓ Creating Revision...
  ✓ Routing traffic...
  ✓ Setting IAM Policy...

My directory contained nothing more than:

total 16
-rw-rw-r--  1 ubuntu ubuntu  312 Jun 24 10:46 main.py
-rw-rw-r--  1 ubuntu ubuntu   47 Jun 24 10:46 requirements.txt

Yet somehow, Google Cloud had taken these two simple files and created a fully functional containerised service. I needed to understand how.

Enter Buildpacks: The Magic Behind the Curtain

The key to understanding this process lies in something called Buildpacks. When I ran that deploy command, Google Cloud Build examined my directory and made some intelligent deductions:

  • Spotted main.py → “This is Python code”
  • Found requirements.txt → “This has Python dependencies”
  • Concluded → “This is a Python web application”

Based on this detective work, it automatically selected the appropriate Python Buildpack - essentially a pre-built set of scripts that know exactly how to handle Python applications.
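
Conceptually, that detect step is doing little more than this - a toy Python sketch of the decision, not the real buildpack code:

# Toy sketch of the buildpack "detect" phase - the real one is far more thorough
from pathlib import Path

def detect_language(app_dir: str) -> str | None:
    src = Path(app_dir)
    if (src / "requirements.txt").exists() or any(src.glob("*.py")):
        return "python"   # hand the build over to the Python buildpack
    if (src / "package.json").exists():
        return "nodejs"   # or the Node.js buildpack, and so on
    return None           # nothing claims the source, so the build fails

print(detect_language("."))  # "python" for a main.py + requirements.txt directory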

The Complete Under-the-Hood Process

After digging deeper, I discovered there’s actually a sophisticated multi-stage process happening:

Stage 1: Source Code Preparation & Upload

When I executed that command, the gcloud CLI didn’t just wave a magic wand. It:

  1. Scanned my current directory, skipping common ignore patterns like .git directories and __pycache__ folders
  2. Created a temporary ZIP archive of the remaining source code
  3. Uploaded this ZIP file to Cloud Storage - specifically to a staging bucket that gets created automatically

This ZIP file becomes the source of truth for the entire deployment process.
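
In effect, the CLI does something like this on my behalf - a rough sketch using the Cloud Storage client library, with made-up bucket and object names (the real staging bucket is created and named by gcloud):

# Rough sketch of the staging step - placeholder names, not what gcloud really uses
import shutil

from google.cloud import storage

# Archive the source directory (gcloud also honours .gcloudignore-style exclusions)
archive_path = shutil.make_archive("/tmp/source", "zip", root_dir=".")

# Upload the archive to the automatically created staging bucket
client = storage.Client()
bucket = client.bucket("my-project_cloudbuild")     # placeholder bucket name
blob = bucket.blob("source/cloudrun-example.zip")   # placeholder object name
blob.upload_from_filename(archive_path)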

Stage 2: Cloud Build Orchestration

Here’s where things get really interesting:

  1. Cloud Build receives a build request pointing to my ZIP file in Cloud Storage
  2. A fresh build environment (essentially a virtual machine) spins up
  3. The build environment downloads my ZIP file from Cloud Storage
  4. Source code gets extracted and prepared for the build process (see the sketch after this list)
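
That retrieval step is roughly the mirror image of the upload - again a sketch with placeholder names:

# Rough sketch of how the build environment retrieves the staged source
import zipfile

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-project_cloudbuild").blob("source/cloudrun-example.zip")
blob.download_to_filename("/tmp/source.zip")

with zipfile.ZipFile("/tmp/source.zip") as archive:
    archive.extractall("/workspace")   # Cloud Build's conventional working directory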

Stage 3: Buildpack Magic

This is where the real cleverness happens:

  1. Language Detection: The buildpack examines my files and confirms it’s dealing with Python
  2. Framework Analysis: It looks at the code structure to understand how to run it
  3. Dependency Installation: Automatically runs pip install -r requirements.txt
  4. Container Creation: Builds a proper Docker image containing my application

The buildpack essentially creates the Dockerfile I never wrote, handling all the configuration details like:

  • Setting up the Python runtime environment
  • Installing dependencies
  • Configuring the web server (likely gunicorn)
  • Setting appropriate environment variables
  • Creating an optimised container image (a rough sketch of this whole phase follows)
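
If I squint, that build phase boils down to something like this - heavily simplified, and the gunicorn entrypoint is my guess at the default rather than anything lifted from the buildpack source:

# Heavily simplified sketch of the Python buildpack's build phase - not the real code
import subprocess

def build(app_dir: str, deps_layer: str) -> dict:
    # Install the declared dependencies into a reusable layer of the image
    subprocess.run(
        ["pip", "install", "-r", f"{app_dir}/requirements.txt", "--target", deps_layer],
        check=True,
    )
    # Record the default web process; Cloud Run injects PORT at runtime
    # (assumption: a gunicorn entrypoint along these lines, as guessed above)
    return {"web": "gunicorn --bind :$PORT main:app"}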

Stage 4: Artifact Registry Storage

Once the container image is built:

  1. The image gets pushed to Artifact Registry (which is why that API needed enabling)
  2. A repository named cloud-run-source-deploy is created in my chosen region
  3. The image is tagged and stored for deployment and future reference (an example path follows)
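
The stored image ends up addressable at a predictable path - something along these lines, with the region being whichever one I chose:

REGION-docker.pkg.dev/hoffmanbeeds/cloud-run-source-deploy/cloudrun-example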

Stage 5: Cloud Run Deployment

Finally, the actual deployment:

  1. Cloud Run pulls the container image from Artifact Registry
  2. Creates a new revision of my service
  3. Configures networking, scaling, and security settings
  4. Routes 100% of traffic to the new revision, at which point the service URL is live (see below)
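
Once traffic is routed, the service behaves like any other HTTPS endpoint - for example (the URL here is a placeholder of the shape Cloud Run hands out):

# Hitting the freshly deployed service - the URL below is a made-up placeholder
import requests

response = requests.get("https://cloudrun-example-abc123-ew.a.run.app/")
print(response.status_code, response.text)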

The Hidden Infrastructure

What I found particularly fascinating is the supporting infrastructure that quietly gets created:

  • Cloud Storage Staging Bucket: Holds my source ZIP file
  • Cloud Build Job: Orchestrates the entire build process
  • Artifact Registry Repository: Stores the container image
  • Cloud Run Service: Runs my containerised application

The data flow looks like this:

Local Files → ZIP Archive → Cloud Storage → Cloud Build → 
Buildpack Processing → Container Image → Artifact Registry → 
Cloud Run Deployment → Live Service

Why This Architecture Is Brilliant

The more I thought about it, the more I appreciated the elegance of this approach:

Decoupling: Each Google Cloud service has a specific responsibility. Cloud Storage handles file transfer, Cloud Build manages the build process, Artifact Registry stores container images, and Cloud Run executes them.

Reproducibility: The ZIP file in Cloud Storage serves as a permanent record of exactly what was deployed. I can always trace back to see precisely what code version is running.

Security: The build happens in isolated, secure environments rather than on my local machine.

Scalability: Cloud Build can handle massive parallel builds without me worrying about infrastructure.

The Cleanup (Or Lack Thereof)

Interestingly, most of these resources persist after deployment:

  • The ZIP file remains in Cloud Storage for potential rollbacks
  • Container images are kept for versioning
  • Build logs stay available for debugging and auditing

This persistence is actually a feature, not a bug - it enables easy rollbacks and maintains a complete audit trail.

What I Learned

This deep dive taught me that modern cloud platforms are doing far more heavy lifting than we might realise. That simple gcloud run deploy --source . command orchestrates a sophisticated pipeline involving multiple Google Cloud services, all working together seamlessly.

The next time someone asks me how containerisation works, I won’t just explain Docker and Dockerfiles. I’ll tell them about Buildpacks and how they’re making deployment accessible to developers who just want to focus on writing code, not configuring infrastructure.

It’s remarkable how much complexity can be hidden behind such a simple command. Google Cloud has essentially automated the entire journey from source code to running service, making it feel like magic when it’s actually just very well-engineered software.
