How Google Cloud Run Deploys Code Without a Dockerfile
Ever wondered what happens under the hood when you deploy to Google Cloud Run without a Dockerfile? I dive deep into the entire process from source code to running service.

Introduction
I was playing around with Google Cloud Run the other day and stumbled upon something rather fascinating. I’d created a simple Python application with just two files - main.py and requirements.txt - and decided to deploy it using the --source flag. What happened next absolutely blew my mind.
gcloud run deploy --source .
That single command took my local Python files and magically turned them into a fully containerised, globally accessible web service. No Dockerfile in sight. But what’s actually happening behind the scenes?
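For context, those two files really are as small as they sound. I haven’t reproduced my exact code here, but a hypothetical minimal Flask app along these lines (with flask and gunicorn listed in requirements.txt) is all the buildpack needs to work with:
# main.py - a hypothetical minimal app; any Flask app that reads $PORT will do
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Cloud Run!"

if __name__ == "__main__":
    # Cloud Run passes the listening port in via the PORT environment variable
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))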
The Initial Mystery
Here’s what I saw in my terminal:
(guvenv) ubuntu@ubuntu:~/cloudrun-example$ gcloud run deploy --source .
Service name (cloudrun-example):
The following APIs are not enabled on project [hoffmanbeeds]:
artifactregistry.googleapis.com
cloudbuild.googleapis.com
run.googleapis.com
Do you want enable these APIs to continue (this will take a few minutes)? (Y/n)? y
Enabling APIs on project [hoffmanbeeds]...
Right off the bat, I noticed it was enabling multiple APIs I hadn’t explicitly requested. This was my first clue that there was a lot more happening than meets the eye.
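If you’d rather not be prompted mid-deploy, the same APIs can be enabled up front - a small sketch, using the project ID from my terminal output above:
gcloud services enable \
    run.googleapis.com \
    cloudbuild.googleapis.com \
    artifactregistry.googleapis.com \
    --project=hoffmanbeeds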
After selecting a region and agreeing to create an Artifact Registry repository, I watched as this happened:
Building using Buildpacks and deploying container to Cloud Run service [cloudrun-example]
✓ Building and deploying new service... Done.
✓ Uploading sources...
✓ Building Container...
✓ Creating Revision...
✓ Routing traffic...
✓ Setting IAM Policy...
My directory contained nothing more than:
total 16
-rw-rw-r-- 1 ubuntu ubuntu 312 Jun 24 10:46 main.py
-rw-rw-r-- 1 ubuntu ubuntu 47 Jun 24 10:46 requirements.txt
Yet somehow, Google Cloud had taken these two simple files and created a fully functional containerised service. I needed to understand how.
Enter Buildpacks: The Magic Behind the Curtain
The key to understanding this process lies in something called Buildpacks. When I ran that deploy command, Google Cloud Build examined my directory and made some intelligent deductions:
- Spotted main.py → “This is Python code”
- Found requirements.txt → “This has Python dependencies”
- Concluded → “This is a Python web application”
Based on this detective work, it automatically selected the appropriate Python Buildpack - essentially a pre-built set of scripts that know exactly how to handle Python applications.
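Buildpacks aren’t proprietary magic, either - they’re an open specification, and Google publishes its builder image. If you want to watch the same detection happen on your own machine, a sketch along these lines should work, assuming Docker and the open-source pack CLI are installed (the local image name is just my own choice):
pack build cloudrun-example-local --builder gcr.io/buildpacks/builder:v1
docker run -p 8080:8080 -e PORT=8080 cloudrun-example-local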
The Complete Under-the-Hood Process
After digging deeper, I discovered there’s actually a sophisticated multi-stage process happening:
Stage 1: Source Code Preparation & Upload
When I executed that command, the gcloud CLI didn’t just wave a magic wand. It:
- Scanned my current directory and created a temporary ZIP archive of my source code
- Uploaded this ZIP file to Cloud Storage - specifically to a staging bucket that gets created automatically
- Excluded common ignore patterns like .git directories and __pycache__ folders
This ZIP file becomes the source of truth for the entire deployment process.
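Those exclusions aren’t hard-coded, either: gcloud honours a .gcloudignore file with gitignore-style syntax if one is present. A minimal sketch of what mine might contain (the virtualenv name matches the one visible in my prompt; yours will differ):
# .gcloudignore - keep the uploaded archive small
.git
.gitignore
.gcloudignore
__pycache__/
*.pyc
guvenv/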
Stage 2: Cloud Build Orchestration
Here’s where things get really interesting:
- Cloud Build receives a build request pointing to my ZIP file in Cloud Storage
- A fresh build environment (essentially a virtual machine) spins up
- The build environment downloads my ZIP file from Cloud Storage
- Source code gets extracted and prepared for the build process
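None of this is hidden from you. The build shows up as an ordinary Cloud Build job, so you can list recent builds and stream the logs of any of them - a quick sketch, with the build ID as a placeholder (depending on your setup you may also need a --region flag):
gcloud builds list --limit=5
gcloud builds log BUILD_ID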
Stage 3: Buildpack Magic
This is where the real cleverness happens:
- Language Detection: The buildpack examines my files and confirms it’s dealing with Python
- Framework Analysis: It looks at the code structure to understand how to run it
- Dependency Installation: Automatically runs pip install -r requirements.txt
- Container Creation: Builds a proper Docker image containing my application
The buildpack essentially creates the Dockerfile I never wrote, handling all the configuration details like:
- Setting up the Python runtime environment
- Installing dependencies
- Configuring the web server (likely gunicorn)
- Setting appropriate environment variables
- Creating an optimised container image
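To make that concrete, here’s roughly the Dockerfile I would otherwise have had to write by hand. This is a sketch of the end result rather than anything the buildpack literally emits (it assembles the image directly from its own base layers), and it assumes gunicorn is listed in requirements.txt and that main.py exposes a Flask object called app:
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Cloud Run tells the container which port to listen on via $PORT
ENV PORT=8080
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app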
Stage 4: Artifact Registry Storage
Once the container image is built:
- The image gets pushed to Artifact Registry (hence why that API needed enabling)
- A repository named cloud-run-source-deploy is created in my chosen region
- The image is tagged and stored for deployment and future reference
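You can confirm the image is there afterwards, which makes a nice sanity check - a sketch, with the region left as a placeholder for whichever one you picked:
gcloud artifacts docker images list \
    REGION-docker.pkg.dev/hoffmanbeeds/cloud-run-source-deploy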
Stage 5: Cloud Run Deployment
Finally, the actual deployment:
- Cloud Run pulls the container image from Artifact Registry
- Creates a new revision of my service
- Configures networking, scaling, and security settings
- Routes 100% of traffic to the new revision
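Each deploy creates a new, immutable revision, so the history is easy to inspect from the CLI - a sketch, assuming the default service name from earlier and your chosen region:
gcloud run revisions list --service=cloudrun-example --region=REGION
gcloud run services describe cloudrun-example --region=REGION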
The Hidden Infrastructure
What I found particularly fascinating is the temporary infrastructure that gets created:
- Cloud Storage Staging Bucket: Holds my source ZIP file
- Cloud Build Job: Orchestrates the entire build process
- Artifact Registry Repository: Stores the container image
- Cloud Run Service: Runs my containerised application
The data flow looks like this:
Local Files → ZIP Archive → Cloud Storage → Cloud Build →
Buildpack Processing → Container Image → Artifact Registry →
Cloud Run Deployment → Live Service
Why This Architecture Is Brilliant
The more I thought about it, the more I appreciated the elegance of this approach:
Decoupling: Each Google Cloud service has a specific responsibility. Cloud Storage handles file transfer, Cloud Build manages the build process, Artifact Registry stores container images, and Cloud Run executes them.
Reproducibility: The ZIP file in Cloud Storage serves as a permanent record of exactly what was deployed. I can always trace back to see precisely what code version is running.
Security: The build happens in isolated, secure environments rather than on my local machine.
Scalability: Cloud Build can handle massive parallel builds without me worrying about infrastructure.
The Cleanup (Or Lack Thereof)
Interestingly, most of these resources persist after deployment:
- The ZIP file remains in Cloud Storage for potential rollbacks
- Container images are kept for versioning
- Build logs stay available for debugging and auditing
This persistence is actually a feature, not a bug - it enables easy rollbacks and maintains a complete audit trail.
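Rolling back, for example, doesn’t involve rebuilding anything - it’s just a matter of pointing traffic at an older revision. A sketch, with the revision name taken from the revisions list:
gcloud run services update-traffic cloudrun-example \
    --to-revisions=REVISION_NAME=100 \
    --region=REGION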
What I Learned
This deep dive taught me that modern cloud platforms are doing far more heavy lifting than we might realise. That simple gcloud run deploy --source . command orchestrates a sophisticated pipeline involving multiple Google Cloud services, all working together seamlessly.
The next time someone asks me how containerisation works, I won’t just explain Docker and Dockerfiles. I’ll tell them about Buildpacks and how they’re making deployment accessible to developers who just want to focus on writing code, not configuring infrastructure.
It’s remarkable how much complexity can be hidden behind such a simple command. Google Cloud has essentially automated the entire journey from source code to running service, making it feel like magic when it’s actually just very well-engineered software.