Overview

Firebase allows developers to create a fully featured backend on top of servers and APIs operated by Google.

overall benefits

  • solid developer experience
  • scales to global, high-traffic workloads
  • generous free tier and pay-as-you-go pricing
  • high-quality docs, well known to AI models
  • actively developed and maintained

main backend components covered in this document

  • authentication with Firebase Auth
  • database with Cloud Firestore
  • storage with Cloud Storage
  • serverless functions with Cloud Functions

focus of this document: web-centric

We create a backend for web apps and use the web-centric client SDKs. We default to TypeScript and pick Node.js as the runtime for Cloud Functions.

CLI tool

The Firebase CLI tool enables several workflows:

  • Emulate the Firebase backend locally, to run it and debug it at no cost.
  • Scaffold the Cloud Functions directory and deploy functions.
  • Submit secrets or API keys to Google, to make them available in Cloud Functions.
  • Add and deploy security rules.
  • List the Firebase projects linked to the Google account.

the CLI executable

The firebase-tools npm package provides the firebase CLI executable.

npm install -g firebase-tools
firebase

underlying Google account

Firebase projects are linked to a Google account.

firebase login:list # prints current Google account
firebase login
firebase logout

list projects and select one

firebase projects:list
firebase use imagetales

project configuration and scaffolding

The init command enables several workflows, such as:

  • scaffold the Cloud Functions directory
  • set up and configure emulators
  • add security rules for Firestore and Cloud Storage

firebase init

help

  • print the list of Firebase commands.
  • print the details about a given command.

firebase help

firebase help emulators:start
firebase help deploy

list deployed functions, deploy functions

firebase functions:list

firebase deploy --only functions
firebase deploy --only functions:requestPlanet

manage secrets

firebase functions:secrets:access ABC_API_KEY
firebase functions:secrets:set ABC_API_KEY
firebase functions:secrets:destroy ABC_API_KEY

start and config emulators

firebase emulators:start
firebase emulators:start --import emulator-data --export-on-exit

We specify which emulators to run with firebase.json. We set the port or rely on the default one if omitted. We scaffold this file with firebase init.

{
    "emulators": {
        "firestore": { "port": 8080 },
        "auth": { "port": 9099 },
        "functions": { "port": 5001 },
        "storage": { "port": 9199 },
        "ui": { "enabled": true }
    },
    "storage": { "rules": "storage.rules" },
    "firestore": {
        "rules": "firestore.rules",
        "indexes": "firestore.indexes.json"
    },
    "functions": [
        /* ... */
    ]
}

deploy security rules

The storage emulator requires storage access rules.

  • We define Storage rules in storage.rules.
  • We define Firestore rules in firestore.rules.

firebase deploy --only storage
firebase deploy --only firestore:rules

gcloud: Google Cloud CLI tool

gcloud enables some operations not available with the firebase tool, such as listing secrets of a given project or describing a Storage bucket.

We call gcloud from the Google Cloud Console's Cloud Shell (it is pre-installed), or we install it locally from an archive provided by Google.

gcloud secrets list --project <PROJECT_ID>
gcloud storage buckets describe gs://abcd.firebasestorage.app

SDKs

We interact with the backend through SDKs. We use the JavaScript SDKs.

client SDKs

The client SDKs run on unprivileged clients, such as browsers. They can also run in a Node.js app that acts as an (unprivileged) client.

npm i firebase

admin SDK: privileged environments

The admin SDK is designed to run on secure, privileged environments.

The admin SDK authenticates itself against Google servers by using a privileged account called a service account. Service accounts are automatically created by Google, scoped to a Firebase project and have specific entitlements. The admin SDK skips user-centric authentication and is not subject to security rules (which are designed to control untrusted requests).

We primarily use the admin SDK within Cloud Functions, an environment pre-configured by Google with the appropriate service account. The admin SDK detects it and uses it.

We use the Node.js admin SDK:

npm i firebase-admin

Cloud Functions SDK

We define Cloud Functions with the (Node.js) Cloud Functions SDK.

We have the package listed as a dependency after scaffolding the Cloud Functions directory with firebase init.

"firebase-functions": "^7.0.0",

Project setup and initialization

identify the Firebase project (client SDK)

The config object stores credentials to identify the Firebase project when interacting with Google servers. These credentials are not sensitive or confidential per se since they only serve to identify the project, and they are exposed on the client.

const firebaseConfig = {
    apiKey: "....",
    authDomain: ".....firebaseapp.com",
    projectId: "....",
    storageBucket: ".....firebasestorage.app",
    messagingSenderId: "....",
    appId: "....",
}

register one or more configs

We give the config to the client SDK. It returns a helper object that we initialize other services with.

const app = initializeApp(firebaseConfig)

When working with several Firebase projects, we get a helper for each project. The first helper has a "[DEFAULT]" internal string identifier. We must provide a string identifier for each additional project we want to work with.

const app1 = initializeApp(firebaseConfig1)
const app2 = initializeApp(firebaseConfig2, "two")

When initializing the admin SDK from Cloud Functions, the environment is automatically configured: we don't need a config object at all and get a helper without one.

const app = initializeApp()

Auth Overview

authenticate app users

The Auth client SDK authenticates users and notifies the app about Auth events. It provides several authentication flows.

auth helper and reading currentUser across the app

We keep a reference to the auth helper to read currentUser. We also provide this helper to some auth-related functions.

const auth = getAuth(app)
auth.currentUser // User | null

currentUser starts as null. Once the SDK has finished loading, and provided the user is logged in, currentUser switches to a User instance.

The User instance holds the user's unique identifier (uid). Other properties may be empty:

currentUser.uid
currentUser.email
currentUser.phoneNumber
currentUser.displayName
currentUser.isAnonymous

react to authentication events

We register a callback on onAuthStateChanged, which Firebase runs on auth events. Firebase gives us a user object (of type User | null).

onAuthStateChanged(auth, (user) => {
    if (user) {
        // user.uid
    }
})

Auth events:

  • the auth SDK has finished loading and no user is authenticated

  • the user has registered (sign up)

  • the user has logged in (sign in)

  • the user has logged out (sign out)

Login occurs in three specific scenarios:

  • the user fills the standard login form or logs in through an identity provider (hard-login)
  • the user is recognized by the SDK and is logged in automatically (credentials stored in browser)
  • (canonically a registration) the user is automatically logged-in after a successful sign-up. Note: a single authentication event occurs.

React patterns

We make the authentication status part of the React state. For example, we work with an isSignedIn variable. We make the display of the authenticated area conditional on isSignedIn being true.

On page load, the Auth SDK is still loading: if we initialize isSignedIn to false, it may not reflect the actual auth state, and may instantly switch to true once the SDK has loaded, which triggers a UI flicker.

It's best to wait for the SDK to load before making any use of isSignedIn. As such, we track the loading state in a one-off state variable, which becomes true on the first authentication event. Only then do we read isSignedIn.

const [hasLoaded, setHasLoaded] = useState(false)
const [isSignedIn, setIsSignedIn] = useState(false)

useEffect(() => {
    const unsub = onAuthStateChanged(auth, (user) => {
        setHasLoaded(true)
        setIsSignedIn(Boolean(user))
    })
    return unsub
}, []) // subscribe once, unsubscribe on unmount

if (!hasLoaded) return null
if (!isSignedIn) return <Lobby />
return <Ingame />

sign out

Signing out is consistent across all authentication flows:

signOut(auth)

Email-Password accounts

A provider that relies on collecting the user's email and password.

registration and hard-login

register:

createUserWithEmailAndPassword(auth, email, password).then((credential) => {
    credential.user // User
})

hard login:

signInWithEmailAndPassword(auth, email, password).then((credential) => {
    credential.user // User
})

send a password reset email

We ask Firebase to send a password-reset email to the provided email. We can customize the email content through the Firebase console:

sendPasswordResetEmail(auth, email)

email account's providerData (implementation detail)

Note: password is the providerId value for the email-password provider.

{
    "providerData": [
        {
            "providerId": "password",
            "uid": "user@example.com",
            "email": "user@example.com",
            "displayName": null,
            "phoneNumber": null,
            "photoURL": null
        }
    ]
}

Identity Providers

We allow users to authenticate with an external provider account, such as a Google account or an Apple account.

select one or several providers

Note: We enable providers in the Firebase console.

const gProvider = new GoogleAuthProvider() // Google Provider

authentication flows

Possible flows:

  • the user authenticates through a popup window.
  • the user authenticates through a redirect.

Flows handle both sign-in and sign-up: we describe a flow with a generic control label:

  • "Authenticate with Foo"
  • "Continue with Foo"

Both flows trigger an authentication event on success. They return a credential (UserCredential), which embeds the user object:

const credential = await signInWithPopup(auth, gProvider)
credential.user // User

Note: We can detect whether this is a new user through a helper method:

const userInfo = getAdditionalUserInfo(credential)
if (userInfo?.isNewUser) {
    // first-time sign-up
}

popup flow

The popup flow may fail if the browser doesn't allow popups.

const credential = await signInWithPopup(auth, gProvider)

redirect flow

The redirect flow relies on navigating to another page and navigating back.

It requires extra work unless the website is hosted on Firebase Hosting.
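A sketch of the redirect flow, assuming a Google provider: signInWithRedirect navigates away, and getRedirectResult reads the result once the user is back on our page.

```typescript
import {
    getAuth,
    GoogleAuthProvider,
    signInWithRedirect,
    getRedirectResult,
} from "firebase/auth"

const auth = getAuth()
const gProvider = new GoogleAuthProvider()

// 1) navigate away to the provider's page
await signInWithRedirect(auth, gProvider)

// 2) back on our page after the redirect, read the result once
const credential = await getRedirectResult(auth) // UserCredential | null
if (credential) {
    credential.user // User
}
```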

Anonymous account

Register an account with no personal information from the user.

signInAnonymously(auth)

The generated credentials are stored in the browser: the user cannot access the account from another device, and cannot recover the account if credentials are lost.

The creation of an anonymous account is partially supported by Auth-triggered Cloud Functions:

  • it triggers the v1 user().onCreate() Cloud Function.
  • it doesn't trigger the blocking beforeUserCreated() Cloud Function (as of now).

check if the account is anonymous

On the client, we check isAnonymous:

auth.currentUser?.isAnonymous // true for anonymous accounts

In auth-triggered Cloud Functions, we read providerData (from the UserRecord).

export const onRegisterNonBlocking = auth.user().onCreate(async (userRecord) => {
    userRecord.providerData.length === 0 // true for anonymous accounts
})

convert to a non-anonymous account

We link to another provider. Since the user already exists (currentUser), we provide it to the link function.

Link to an email credential, after collecting the email address and password:

const emailCred = EmailAuthProvider.credential(email, password)
await linkWithCredential(auth.currentUser, emailCred)

Link to an identity provider, with a popup:

const gProvider = new GoogleAuthProvider()
const result = await linkWithPopup(auth.currentUser, gProvider)

Manage users

We manage users with the Auth Admin-SDK:

import { getAuth } from "firebase-admin/auth"
const auth = getAuth()

list users

listUsers() fetches at most 1000 users at once. If we have more users, we use pagination.

const result = await auth.listUsers() // implied 1000 max
const users = result.users

users.forEach((user) => {
    user // UserRecord

    user.uid
    user.email

    // HTTP-date string (RFC 1123)
    user.metadata.creationTime // "Tue, 13 Jun 2023 17:00:00 GMT"
    user.metadata.lastSignInTime // "Wed, 14 Jun 2023 17:00:00 GMT"
})
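The pagination mentioned above relies on the pageToken returned by listUsers. A sketch (Admin SDK; listAllUsers is a hypothetical helper name):

```typescript
import { getAuth, UserRecord } from "firebase-admin/auth"

// Walk the user base page by page; listUsers returns a pageToken
// as long as more results remain.
async function listAllUsers(): Promise<UserRecord[]> {
    const auth = getAuth()
    const all: UserRecord[] = []
    let pageToken: string | undefined
    do {
        const result = await auth.listUsers(1000, pageToken)
        all.push(...result.users)
        pageToken = result.pageToken
    } while (pageToken)
    return all
}
```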

Firestore

conceptual

Firestore is a database made of schema-less collections and documents. It is a NoSQL database that is most similar to MongoDB.

A collection is a set of documents.

A document is a set of fields holding primitive data types (number, string, timestamps...). A document has up to 20k fields and stores up to 1 MiB of data.

A reference serves to identify a collection or a document in the database. It doesn't guarantee that the collection or document exists: it's merely a path that may point to nothing.

firestore reference

firebase-admin is a wrapper around @google-cloud/firestore. It has the same syntax and capabilities.

import paths

"firebase/firestore" // client SDK
"firebase/firestore/lite" // client SDK

"firebase-admin/firestore" // admin SDK

helper object

We init a db object, for use in Firestore-related functions.

// const app = initializeApp()
const db = getFirestore(app)

Collection

Collection Reference

use the collection reference

We use the collection reference to:

  • fetch all documents (it acts as a query): getDocs(colRef)

  • build a query targeting the collection: query(colRef, filters..)

  • build a document reference (random-ID): doc(colRef), or one that refers to a specific document: doc(colRef, docId)

  • add a document to the collection, (random ID, generated on the fly): addDoc(colRef, data).

build a collection reference

We use a path to identify the collection (uniquely). Root collections have the simplest path, such as "users" (no starting slash). Sub-collection paths are built from several components.

We set the path as:

  • a single string, with slash separators.

  • a sequence of strings, with no slash separators.

const colRef = collection(db, "users")
const colRef = collection(db, `users/${uid}/custom_list`)
const colRef = collection(db, "users", uid, "custom_list")
const colRef = db.collection(`users/${uid}/custom_list`) // admin SDK

TypeScript: set the document's type at the collection level.

Collections are schema-less: they don't define the shape of their documents.

When receiving document data from the database, the client SDK checks the actual data and instantiates documents with it. The instantiated documents are of any shape and may differ from one another.

The instantiated documents are typed as DocumentData, which is a loose type that doesn't provide information about the content.

We provide a more precise type at the collection reference level. We do it through a type assertion:

const colRef = collection(db, "players") as CollectionReference<Player, Player>

Instantiated documents are now of type Player.

Converter

The SDK supports having two document shapes on the client:

CollectionReference<AppModelType, DbModelType>

DbModel is the representation of the received data, aka the object that the SDK instantiates as a direct translation of the received data, with no transformation. It is DocumentData by default.

We can add a converter to transform it into a different shape for use in the app.

AppModel represents the object as it is after the converter's transformation. It also defaults to DocumentData. We set it to whatever type the converter converts to.

Before sending to Firestore, the converter transforms AppModel back to DbModel.

Transformation examples:

  • We transform the DbModel's Timestamp field to an AppModel Date field.
  • We add properties to AppModel.

implement the converter

We transform the documents at the app boundaries:

  • upon receiving from Firestore (fromFirestore())
  • upon sending to Firestore (toFirestore())

We define the functions and add them to the converter.

fromFirestore() takes the snapshot as instantiated:

fromFirestore(snapshot: QueryDocumentSnapshot<FirestoreWorkout>): Workout {
    // to client shape
    const firestoreWorkout = snapshot.data()
    const workout = { ...firestoreWorkout, date: firestoreWorkout.date.toDate() }
    return workout
}

toFirestore() takes the object in its app-side shape.

toFirestore(workout: Workout) {
    // to database shape
    return { ...workout, date: Timestamp.fromDate(workout.date) }
}

We gather the transforms in the converter (FirestoreDataConverter). While the types may be inferred from the transforms, we may still annotate them for safety.

// FirestoreDataConverter<AppModel, DbModel>
const myConverter: FirestoreDataConverter<Workout, FirestoreWorkout> = {
    toFirestore() {},
    fromFirestore() {},
}

We attach it to the collection reference to let it type its documents.

const colRef = collection(db, "players").withConverter(myConverter)

Document

Document reference

The document reference identifies a document within the database, and embeds meta information:

docRef.id // "Nk....WQ"
docRef.path // "users/Nk....WQ"
docRef.parent // colRef

use the document reference

We use the reference for most CRUD operations:

  • read the document: getDoc

  • update an existing document (it errors if the document doesn't exist): updateDoc

  • delete the document: deleteDoc

  • create the document, or override an existing one (upsert): setDoc

build a document reference

The document's path identifies it uniquely. We set the path as a single string or build it from string components.

const docRef = doc(db, "users", id)
const docRef = doc(db, "users/Nk....WQ")

const docRef = collectionRef.doc("NkJz11WQ") // admin sdk

Alternatively, we provide the collectionRef and the document ID. If we omit the ID, the SDK generates a random one.

const docRef = doc(collectionRef, id)
const docRef = doc(collectionRef) // randomized ID

read document at reference (get)

The get operation succeeds even if no document exists: checking for a document's existence is a valid read.

The function returns a Document snapshot, which may be empty:

getDoc(docRef) // DocumentSnapshot
docRef.get() // DocumentSnapshot

Document snapshot

The Document snapshot is a wrapper that doesn't guarantee the document's existence. It exposes the document (or its absence) via a getter. Unless we provide a more specific type, the document's type is DocumentData.

Note: data() is a function because it accepts some configuration.

docSnapshot.exists()
docSnapshot.data() // DocumentData | undefined

It also exposes helpers and metadata.

docSnapshot.id // NkJ...7f
docSnapshot.ref // DocumentReference
docSnapshot.metadata // SnapshotMetadata

Query a specific field

docSnapshot.get("address.zipCode") // low use

real-time listener

Set up a real-time listener on a document reference:

const unsub = onSnapshot(docRef, (docSnapshot) => {
    docSnapshot.data() // DocumentData | undefined
})

Query

overview

A query matches documents based on a set of criteria, instead of pre-defined document references.

the result of a query: a query snapshot

The query snapshot hosts the list of document snapshots (docs). The list is empty when no match occurred.

The document snapshots are of type QueryDocumentSnapshot (not DocumentSnapshot) but they have the same API surface. They are guaranteed to have an underlying document at snapshot.data() (this is the difference).

querySnapshot.docs // list of document snapshots
querySnapshot.empty
const cats = querySnapshot.docs.map((snap) => snap.data())

a collection reference is a query

A collection ref is technically a query and is used to target all documents in a read (get):

getDocs(q)
getDocs(colRef)

q.get()
colRef.get()

build a query

We add value-based filters, set the order and limit the count:

const q = query(colRef, where(..), where(..), orderBy(..), limit(..))
const q = collection(..).where(..).orderBy(..).limit(..)

where filter: look for documents with a given value

We filter documents based on a value we want to find in a property. We request an exact value or one within a range. Depending on the data, we expect a single match at most or several.

Note: documents that do not possess the property are filtered out.

For example, we look for the document whose id is of value user.id.

where(propertyName, operator, value)
where("id", "==", user.id)

We set the requirement for the value: exact match, difference, being smaller or larger, matching at least one value of a list, or differing from all values of a list.

==
!=

<
<=
>
>=

"in" // the property is equal to either A, B or C
"not-in" // the property is different from A, B and C.

If the property is an array, we can ask it to contain a given value, or at least one of several values.

"array-contains" // the array contains this value
"array-contains-any" // the array contains A, B or C..

order documents based on one field

We order documents based on the value of a given field. By default, documents are sorted by ascending value; it's best to set the direction explicitly rather than relying on that default.

orderBy(propertyName, orderDirection)
orderBy("postCount", "asc")
orderBy("postCount", "desc")

We can start from a given value, e.g. documents that have at least 10 posts (or more than 10 posts).

startAt(10)
startAfter(10)

pagination: cap the read, read the next page

Get at most n documents:

limit(20)

To get the next page, we provide a cutoff document (snapshot), stored from the current batch: we receive document snapshots starting beyond it:

query(colRef, startAfter(docSnapshot), limit(20))

While we can include the cutoff document in the next batch, it is mostly for other patterns:

startAt(docSnapshot)
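Putting the pieces together, a sketch of reading two consecutive pages with the client SDK (the messages collection and the createdAt ordering field are illustrative):

```typescript
import {
    collection,
    getDocs,
    getFirestore,
    limit,
    orderBy,
    query,
    startAfter,
} from "firebase/firestore"

const db = getFirestore()
const colRef = collection(db, "messages")

// first page
const firstPage = await getDocs(query(colRef, orderBy("createdAt"), limit(20)))

// keep the last snapshot of the batch as the cutoff for the next page
const cutoff = firstPage.docs[firstPage.docs.length - 1]
const nextPage = await getDocs(
    query(colRef, orderBy("createdAt"), startAfter(cutoff), limit(20))
)
```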

run the query (get)

const qs = await getDocs(q)
const qs = await q.get()

real-time listener

Set up a real-time listener on the query: we receive a query snapshot:

const unsub = onSnapshot(q, (qs) => {
    const documents = qs.docs.map((docSnapshot) => docSnapshot.data())
    setMessages(documents)
})

Create and update documents

strict document creation

Strictly create a document with a controlled ID. The operation aborts if a document exists. (admin SDK only)

docRef.create(data)

The client SDK wants to be offline-friendly. As such, it doesn't support strict document creation with a controlled ID, because that requires a server roundtrip to green-light it. It does support random-ID creation because the document won't exist by design:

addDoc(collectionRef, data)
db.collection("message").add(data)

To get a strict document creation with a controlled ID, we must use a two-step transaction: we first read, throw if a document exists, and otherwise write.
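The read-then-write transaction mentioned above can be sketched as follows (client SDK; the usernames collection and the payload are illustrative):

```typescript
import { doc, getFirestore, runTransaction } from "firebase/firestore"

const db = getFirestore()
const docRef = doc(db, "usernames", "johnny") // controlled ID

await runTransaction(db, async (transaction) => {
    const snapshot = await transaction.get(docRef)
    // abort the transaction if the document already exists
    if (snapshot.exists()) throw new Error("Document already exists")
    transaction.set(docRef, { reservedAt: new Date() })
})
```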

upsert

An upsert works whether or not the document exists, with the same result (idempotent). It is destructive, aka it overrides any existing document: it has the effect of a creation:

setDoc(docRef, data)
docRef.set(data)

partial update

We assume the document already exists: we use the update pattern or the set merge pattern.

The update pattern is a strict update: it correctly fails if the document doesn't exist.

Both update and set merge expect a change object.

For update, the change fields replace the existing ones as provided; the other fields are unchanged.

If we want to mutate a single property within an object field (aka mutate a sub-field), we target the sub-field directly, with a dot notation field:

const change = { displayName: "Johnny Appleseed" }
updateDoc(docRef, change)
docRef.update(change)

// sub-field
const change = { "address.city": "Lyon" }
updateDoc(docRef, change)

Note: We type the change as a Partial or a Pick of the document. If TypeScript complains about the dot notation, we use a different overload of updateDoc():

updateDoc(docRef, new FieldPath("address", "city"), "Lyon")

partial update with set

set comes with a merge option that changes its meaning: we are now providing a change object. The risk is forgetting the merge option and overriding the whole document with the change object.

We provide the sub-fields we want to change. The other ones are preserved (deep merge):

const change = { address: { city: "Lyon" } } // preserves the other address fields, e.g. country

setDoc(docRef, change, { merge: true })
docRef.set(change, { merge: true })

blind increment

We ask the server to increment the field by n, which may be negative for decrement. We skip a preemptive read since we don't care about the absolute value:

updateDoc(docRef, {
    activityScore: increment(1),
})

docRef.update({
    activityScore: FieldValue.increment(1),
})

delete field

We ask the server to delete a field. This avoids having to fetch the document first and write it back without the field:

updateDoc(docRef, {
    fleet: deleteField(),
})

docRef.update({
    fleet: FieldValue.delete(),
})

server timestamp field

Ask the server to generate a Firestore timestamp value.

updateDoc(docRef, {
    updatedAt: serverTimestamp(),
})

docRef.update({
    updatedAt: FieldValue.serverTimestamp(),
})

delete document

docRef.delete()
deleteDoc(docRef)

Batch writes

Instead of performing multiple individual writes, we gather them in a batch object and ask Firebase to commit all the writes at once. A single network request is sent.

It is atomic: if one write fails, the others fail as well. This prevents a broken state where only some documents are updated.

batch update from the client

Collect up to 500 writes in a batch object, then execute the batch with commit():

const batch = writeBatch(db)

batch.update(docRef1, { timezone: "Europe/London" })
batch.update(docRef2, { timezone: "Europe/London" })

await batch.commit()

batch update from the Admin SDK

In the admin SDK, we get a batch helper differently. The remaining code is the same.

const batch = db.batch()

// same code

other batch operations

batch.set(docRef, data)
batch.set(docRef, data, { merge: true })
batch.update(docRef, data)
batch.delete(docRef)
batch.create(docRef, data) // Admin SDK

Transaction

Read and write atomically with runTransaction.

The transaction guarantees that by the time we commit the write, the data on which we decided to act is still the same in the database (unchanged).

Outside a transaction, the data we read can change during the time window that separates the read hitting the database and the write hitting the database, and there is no check that prevents the write if the read data has changed.

Note: the Admin SDK locks the document during the read to write time-window, so there won't be retries. The client SDK doesn't lock the document. Instead, if data changes during the time window, a new read is done to account for the new value.

For example, if credits is positive and sufficient, we accept the purchase, but by the time we are about to commit the purchase, we want credits not to have changed since the read, otherwise we start the check process over again. This is the transaction pattern.

runTransaction

runTransaction expects a callback. transaction is a helper that holds the read and write methods (get, update, set).

Note that we await reads, but don't await writes, due to how runTransaction is implemented.

In case of failed preconditions, we abort the transaction with a throw.

Client SDK:

await runTransaction(db, async (transaction) => {
    // read
    const snapshot = await transaction.get(docRef)

    // check condition
    const currentCount = snapshot.data().count
    if (currentCount >= 10) throw Error("Sorry, event is full!") // Abort

    // proceed
    transaction.update(docRef, { count: currentCount + 1 })
})

Admin SDK:

await db.runTransaction(async (transaction) => {
    // identical API
})

Timestamp value type (advanced)

Storing dates as ISO strings is simpler to reason about and more portable.

However, the Firestore database comes with a native value type for storing dates, called timestamp, and that is the pattern we describe here. The Firestore SDK provides a Timestamp type that represents a timestamp field.
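To illustrate the ISO-string alternative: ISO-8601 strings survive JSON serialization unchanged and sort lexicographically in chronological order (plain JavaScript, no SDK involved):

```typescript
const earlier = new Date("2023-06-13T17:00:00.000Z").toISOString()
const later = new Date("2023-06-14T17:00:00.000Z").toISOString()

// lexicographic sort == chronological sort for ISO-8601 strings
const sorted = [later, earlier].sort() // → [earlier, later]

// round-trip back to a Date when needed
const restored = new Date(earlier)
```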

storing timestamps

As we attempt to store data, the SDK detects Date and Timestamp fields and assumes we want to store them as timestamps.

const user = {
    createdAt: new Date(),
    createdAt_: Timestamp.now(),
}

When preparing data to be transported through an HTTP request, the SDK serializes Date and Timestamp objects to objects with a single timestampValue property.

{
  "createdAt": { "timestampValue": "2025-10-07T18:47:13.279000000Z" },
  "createdAt_": { "timestampValue": "2025-10-07T18:47:13.279000000Z" }
}

The database detects this pattern and stores those fields as timestamps.

receiving timestamps

Timestamp is the type designed to represent database timestamps. As we receive timestamp fields from the database, the Firestore SDK instantiates them as Timestamp objects.
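A sketch of reading a timestamp field back (createdAt is a hypothetical field name; toDate and toMillis are the usual accessors):

```typescript
import { doc, getDoc, getFirestore, Timestamp } from "firebase/firestore"

const db = getFirestore()
const snapshot = await getDoc(doc(db, "users", "NkJz11WQ"))

// the SDK instantiated the field as a Timestamp
const createdAt = snapshot.data()?.createdAt as Timestamp
createdAt.toDate() // JavaScript Date
createdAt.toMillis() // epoch milliseconds
```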

Firestore Security rules

We define the security rules in the Firebase console or in a firestore.rules file. Firebase doesn't bill reads and writes denied by security rules.

rules version

rules_version = "2"

firestore scope

We start by scoping the rules to cloud.firestore:

service cloud.firestore {
    // ...
}

database scope

We scope the rules to the current database. This is boilerplate code: we don't use the database wildcard.

match /databases/{database}/documents {
    // ...
}

set rules for a given collection

We target a collection. The document ID wildcard variable holds the requested document ID. We can, for example, compare the user document's ID with the authentication data.

match /users/{user_id} {
    // ...
}

operations and condition

allow operation, operation: if condition;

operations

read
create
update
delete

authentication, user ID

If the user is not authenticated, request.auth is null. We may filter out unauthenticated users:

allow read: if request.auth != null;

The user's authentication uid (if logged-in) is available as request.auth.uid:

request.auth.uid

Note: if auth is null, trying to read uid triggers a failsafe mechanism that denies the request.

green-light specific documents

We green-light the document if its ID matches a criterion:

    match /players/{player_id} {
         allow read: if request.auth.uid == player_id;
    }

We green-light the document if one of its fields matches a criterion. resource.data represents the requested document. For example, we check the document's owner property against auth.uid.

    match /planets/{planet_id} {
         allow read: if request.auth.uid == resource.data.owner.id;
    }

If the document is missing the field, the request is denied.

get authorization information in a separate document

We read a different document with get(). It is a billed read.

get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank

This unlocks a pattern where we read some authorization data in a different document, such as the user document, which would store the user's entitlements or ranks. This may not be a good architecture.

For example, to require a specific rank:

    match /characters/{character_id} {
         allow update: if get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank == "Game Master";
    }

For example, we enforce that the requested character's zone is the same as the player's character's zone:

match /overworld_characters/{overworld_character} {
     allow read: if get(/databases/$(database)/documents/characters/$(request.auth.uid)).data.zone == resource.data.zone;
}

payload validation

request.resource.data is the request's payload. We validate critical fields such as the document's owner.

request.resource.data.age > 0

  // A) user creates a post that references their own uid: check the uid field.
  allow create: if request.auth.uid == request.resource.data.uid;

  // B) user modifies or deletes a post: check the uid field on both the
  // existing document and the incoming payload.
  allow update, delete: if
  request.auth.uid == resource.data.uid
  &&
  request.auth.uid == request.resource.data.uid;

Note: We can instead forbid writes coming from the client and perform validation in a Cloud Function with TypeScript.
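To illustrate the server-side alternative, here is a minimal, hypothetical validation helper that a Callable function could run before writing with the admin SDK (isValidPostPayload and its fields are illustrative names, not a Firebase API):

```typescript
// Hypothetical payload type for a "create post" Callable function.
interface PostPayload {
    uid: string
    text: string
}

// Mirrors the create rule above: the payload must reference the caller's
// own uid, and the text must be non-empty and reasonably sized.
function isValidPostPayload(callerUid: string, payload: PostPayload): boolean {
    if (payload.uid !== callerUid) return false
    if (payload.text.trim().length === 0) return false
    return payload.text.length <= 1000
}
```

Inside onCall, we would compare request.auth.uid against request.data.uid this way and throw an HttpsError when the check fails.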

Storage

reference

file terminology, file patterns

Firebase Storage is a wrapper around Google's Cloud Storage, a cloud storage service. It is technically an object storage service because it stores immutable objects in a flat bucket, instead of files in a hierarchical filesystem.

Firebase Storage reintroduces the concept of files, folders and file hierarchy, primarily through the convention of using paths as object names, such as public/abc.png. The SDKs and docs use the term files instead of objects.

project's default bucket (implementation detail)

A Firebase project is given a default bucket, identified by a URI. The URI is made of two components: a gs:// prefix and a domain name. The default bucket's domain embeds the project's ID, which makes it globally unique. If we add another bucket, we pick a globally unique name ourselves:

"gs://<PROJECT-ID>.firebasestorage.app"
"gs://<PROJECT-ID>.appspot.com" // old default bucket URIs

"gs://<GLOBALLY-UNIQUE-ID>" // non-default bucket URI

The URIs are not HTTP URLs: no data is served if we rewrite them into HTTP URLs.

storage helper

We get a storage helper.

  • Firebase exports a storage variable so we use another name for the helper.
  • The client SDK uses the default bucket unless we specify another one in the initializer:
const storageService = getStorage(app)
const storageService = getStorage(app, "gs://...")

File references and metadata

file path

A file is uniquely identified by its path in the bucket. It includes the file extension.

file reference

We use references to interact with files. We build them with file paths:

const fileRef = ref(storage, "tts/2F14Izjv.mp3")
const fileRef = bucket.file("tts/2F14Izjv.mp3") // admin SDK

The file reference does not guarantee the file's existence. The properties are of limited use:

ref.bucket // "abc.firebasestorage.app"
ref.fullPath // "tts/abc.mp3"
ref.name // "abc.mp3"

// computed references
ref.parent // ref(storage, "tts")
ref.root // ref(storage, "/")

file metadata

We fetch an existing file's metadata:

const metadata = await getMetadata(fileRef) // client SDK

It is a FullMetadata instance:

// repeat from fileRef
metadata.bucket
metadata.fullPath
metadata.name

metadata.size // 1048576 (bytes)
metadata.contentType // "audio/mpeg" (MIME type)
metadata.timeCreated // "2026-01-04T12:34:56.789Z"

metadata.ref // file reference

List files and folders

folders and prefix terminology

The API describes folders as prefixes, but the docs also mention folders.

folder existence

A file, by its name alone, creates nested folders when its name contains subpaths. For example, abc/def/hello.pdf creates two folders: abc and def. Those folders do not exist on their own: they are an artificial byproduct.

By design, those folders can't be empty, because they derive from a nested file.
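The mechanics can be modeled with plain code. This sketch (an illustrative model, not SDK code) derives a shallow listing, direct child files and implied prefixes, from flat object names:

```typescript
// Derive a shallow listing from flat object names (illustrative model;
// the real computation happens server-side).
function shallowList(objectNames: string[], folder: string) {
    const base = folder === "" ? "" : folder.replace(/\/+$/, "") + "/"
    const items: string[] = []
    const prefixes = new Set<string>()
    for (const name of objectNames) {
        if (!name.startsWith(base)) continue
        const rest = name.slice(base.length)
        const slash = rest.indexOf("/")
        if (slash === -1) items.push(name) // direct child file
        else prefixes.add(base + rest.slice(0, slash)) // implied folder
    }
    return { items, prefixes: Array.from(prefixes) }
}
```

Listing the root yields abc as a prefix even though no object named abc exists; delete the nested file and the folder vanishes with it.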

use a folder reference to list its content

We build a reference to a folder to list its content. It is a shallow list: we see the top level files and folders.

The list discriminates files (items) from folders (prefixes), putting them into separate arrays. Each entry is exposed as a reference (StorageReference) regardless of whether it is a file or a folder.

Note: list() is a capped version of listAll() that accepts a count limit.

const folderRef = ref(storage, "uploads")

const result = await list(folderRef, { maxResults: 100 })
// const result = await listAll(folderRef)

result.items // StorageReference[]
result.prefixes // StorageReference[]

Read, download files

general considerations

  • The client SDK is subject to security rules. Some functions perform access control once, then hand out a bearer URL that is not subject to security rules afterwards (one-off access control).
  • Download workflows are influenced by the browser restrictions.

get a HTTP URL on the client

We request a read URL. Access control is performed when requesting the URL.

The returned URL is a bearer URL, which is not subject to access-control. We consume it outside the realm of the Storage SDK, as a regular URL.

Note: the URL remains valid unless manually revoked at the file level in the Firebase Console.

getDownloadURL(fileRef).then(url => ...)
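For illustration, a download URL typically embeds the object path and the access token as query parameters (the exact host and shape are an implementation detail; the sample URL and token below are made up). Parsing one with the standard URL API shows why it behaves as a bearer URL:

```typescript
// Sample download URL (illustrative values).
const url = new URL(
    "https://firebasestorage.googleapis.com/v0/b/demo.firebasestorage.app/o/tts%2Fabc.mp3?alt=media&token=3c0c2e1a"
)

// The object path rides in the pathname, percent-encoded.
const objectPath = decodeURIComponent(url.pathname.split("/o/")[1])

// The token query parameter is the bearer part: whoever holds the URL
// holds the access.
const token = url.searchParams.get("token")
```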

consume a cross-origin HTTP URL on the client

The URL is cross-origin. The challenges and patterns to consume a cross-origin URL are not specific to Firebase.

The way we consume the URL determines if CORS headers are necessary.

  • The browser allows cross-origin URLs in media elements' src attribute (hot linking), with no CORS headers required.
  • The browser allows navigating to cross-origin URLs (basic browser behavior). For example, we navigate to an image in a new tab.
  • The browser doesn't allow background fetch of cross-origin resources unless explicit CORS headers are present on the server. This applies to fetch() and functions that rely on it.

Buckets do not have permissive CORS headers by default, but we can add them on demand. CORS headers whitelist one, several or all domains. We use gsutil or gcloud to whitelist our domain (see the dedicated chapter).

download a Blob with the client SDK

A blob is an opaque object that we fetch and transform to a local URL for easier saving. When using getBlob():

  • access rules are enforced
  • CORS headers are required (it uses fetch() under the hood)

We create a local (same-origin) URL out of the blob, to avoid the browser restrictions against cross-origin URLs. It restores the ability to download content through a single click, without navigating to a different URL (see below).

getBlob(fileRef).then((blob) => {
    // create a local URL and trigger download imperatively
})

add the download attribute to an anchor tag, guard against cross origin URLs

The download attribute on anchor tags (<a href="" download>) offers one-click downloads for same-origin URLs or local URLs.

For cross-origin URLs, clicking the anchor tag triggers standard browser navigation instead: the browser navigates to the resource and shows its full URL.

create a local URL out of a blob (browser specific)

This example also triggers download programmatically, and revokes the local URL for clean up. We set download to the file name.

// 3. Create a local URL out of the blob
const objectURL = URL.createObjectURL(blob)

// 4. Use the local URL to trigger the download
const link = document.createElement("a")
link.href = objectURL
link.download = img.id + ".png"
document.body.appendChild(link)
link.click()
document.body.removeChild(link)

// 5. Clean up by revoking the local URL
URL.revokeObjectURL(objectURL)

Upload data

client SDK

upload a Blob or a File

We prepare some data in a JavaScript Blob or File object, and upload it to the reference.

const result = await uploadBytes(fileRef, file)
  • The upload is an unconditional upsert which overwrites existing files.
  • It makes the file immediately downloadable through the SDK read functions.
  • On success, we receive an UploadResult, which wraps the bucket file's metadata and the file reference.
result.metadata // FullMetadata
result.ref

(advanced) upload and track the progress

For each tick, we receive a snapshot. We may show the upload progress.

const uploadTask = uploadBytesResumable(ref, file)

uploadTask.on(
    "state_changed",
    /* on snapshot */
    function (snapshot) {
        // snapshot.bytesTransferred
        // snapshot.totalBytes
        // snapshot.state // "paused" | "running"
    },
    function (error) {},
    function () {
        /* on completion */
        getDownloadURL(uploadTask.snapshot.ref).then(/**/)
    }
)

admin SDK

upload a Node.js Buffer and make it downloadable

We prepare some data in a Node.js Buffer, and upload it to the reference.

await fileRef.save(imageBuffer, {
    resumable: false,
    metadata: {
        contentType: `image/png`,
        cacheControl: "public, max-age=31536000, immutable",
    },
})

Note: it doesn't make the file downloadable for clients: a client getDownloadURL() fails. This is because the underlying Cloud Storage object is missing a Firebase-specific download token in its metadata.

To make it downloadable for clients, we use the admin SDK's getDownloadURL(). It adds a permanent download token to the underlying Cloud Storage object. It also returns the bearer URL (a tokenized URL that embeds this very access token, and is not subject to security rules).

We can store it in a database, return it to the client, or discard it and let the client SDK generate the URL on its own with getDownloadURL() (since the file is now downloadable).

const url = await getDownloadURL(fileRef)

We can invalidate the access token from the Firebase console. It makes the file non-downloadable. The bearer URL becomes invalid.

advanced: read and write the token

The token, if present, is in the File's metadata field. We should avoid setting this field manually when using save(). We use getDownloadURL instead (see full example below).

metadata: {
  firebaseStorageDownloadTokens: token
}

upload image example (admin SDK)

We upload an image and make it readable by clients. We may store the bearer URL.

// 1.0 create a file reference
const fileRef = bucket.file(`generated/${userID}/cat.png`)

// 1.1 create a Buffer object
const imageBuffer = base64ToBuffer(base64Data)

// 1.2 upload the Buffer object
await fileRef.save(imageBuffer, {
    resumable: false,
    metadata: {
        contentType: `image/png`,
        cacheControl: "public, max-age=31536000, immutable",
    },
})
//  1.3 make it readable by client SDKs (generate a token).
const url = await getDownloadURL(fileRef)

//  1.4 store the bearer URL (if applicable)
//  ...
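The example assumes a base64ToBuffer helper, which is not an SDK function. A minimal Node.js sketch:

```typescript
// Hypothetical helper: decode a base64 string into a Buffer, stripping an
// optional data-URL prefix such as "data:image/png;base64,".
function base64ToBuffer(base64Data: string): Buffer {
    const raw = base64Data.replace(/^data:[^;]+;base64,/, "")
    return Buffer.from(raw, "base64")
}
```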

Setting the bucket CORS header

Some read operations require the client's domain to be whitelisted by a CORS header. We add authorized domains to cors.json and send it to Google through the CLI:

cors.json

[
    {
        "origin": ["https://imagetales.io", "http://localhost:5173"],
        "method": ["GET"],
        "maxAgeSeconds": 3600
    }
]

Register cors.json:

gcloud storage buckets update gs://abc.firebasestorage.app --cors-file=cors.json

(Debug) Describe the existing bucket CORS config:

gcloud storage buckets describe gs://abc.firebasestorage.app --format="default(cors_config)"

read operations that require a CORS whitelist

Browser reads relying on background fetch rather than navigating to the URL require a CORS whitelist:

  • getBlob(fileRef) to get a Blob, which uses fetch() under the hood.
  • getBytes(fileRef) to get an ArrayBuffer, which uses fetch() under the hood.
  • using fetch() manually with a bearer (tokenized) URL.

Cloud Functions

Cloud Functions are serverless functions: we run code on servers operated by Google.

As it is a secure environment, we run sensitive tasks: authenticate requests, perform server-side validation, use API keys, make sensitive writes to the database, and more.

Functions trigger on spontaneous requests, or on events happening in the Firebase ecosystem, such as the registration of new users through Firebase Auth.

react to spontaneous requests: two options

The first option is to establish a bare-bones REST-API endpoint, called an HTTP function. It exposes a regular REST API endpoint, with an Express.js-like API.

The second option is to establish a Callable function, a pattern that involves both a server SDK and a client SDK, which work hand in hand to provide a better developer experience, such as managing authentication.

onRequest and onCall are the two helpers to define those functions:

import { onRequest, onCall } from "firebase-functions/https"

select and deploy functions

The main file, through the functions that it exports, determines the functions to be deployed. The main file is the one we set in package.json. It must be a JavaScript file:

{
    "main": "lib/index.js"
}

It is usually a barrel file that re-exports functions from their own files:

export { requestPlayer } from "./requestPlayer.js"

We deploy functions imperatively, all or a few:

firebase deploy --only functions
firebase deploy --only functions:requestPlayer
firebase deploy --only functions:requestPlayer,functions:requestPlanet

To delete a function, we remove it from the main file and run the deploy command. The CLI detects its absence and prompts us for confirmation.

define functions with TypeScript

We use a workflow that transpiles to JS since the main file must be JavaScript. The convention is to store TypeScript code in src/ and transpile towards lib/. The main file is lib/index.js.

tsconfig.json configures the transpilation, targeting the Node.js runtime:

{
    "compilerOptions": {
        "module": "NodeNext",
        "moduleResolution": "nodenext",
        "outDir": "lib",
        "esModuleInterop": true,
        "noImplicitReturns": true,
        "noUnusedLocals": true,
        "sourceMap": true,
        "strict": true,
        "target": "es2020"
    },
    "compileOnSave": true,
    "include": ["src"]
}

We make the transpilation continuous with the watch flag. The emulator then detects changes in the generated JS files and updates the emulated services:

tsc -w
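The scaffolded package.json typically wires these steps as npm scripts (shown as a typical scaffold; exact script names and contents may vary):

```json
{
    "main": "lib/index.js",
    "scripts": {
        "build": "tsc",
        "build:watch": "tsc --watch",
        "serve": "npm run build && firebase emulators:start --only functions",
        "shell": "npm run build && firebase functions:shell",
        "deploy": "firebase deploy --only functions"
    }
}
```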

admin SDK

Within cloud functions, we interact with other Firebase services through the admin SDK. For example, we work with the project's Firestore database:

import { initializeApp } from "firebase-admin/app"
import { getFirestore } from "firebase-admin/firestore"

const app = initializeApp()
const db = getFirestore(app)

Define Callable functions

The code we run in a Callable function has access to the user authentication status and the request's data.

Callable functions support streaming the response: we describe the pattern in a dedicated section.

Overview and syntax

synopsis

onCall<ReqData, Promise<ResData>>(callback)
onCall<ReqData, Promise<ResData>>(options, callback)

the callback

The callback has access to the request object (CallableRequest), which exposes auth and data.

We define the callback async so it returns a promise. The connection is kept open until the promise settles.

onCall<ReqData, Promise<ResData>>(async (request) => {
    request.auth // AuthData | undefined
    request.auth?.uid

    request.data // ReqData

    return { message: ".." } // ResData
})
  • auth is undefined when the request is unauthenticated. It has uid otherwise.
  • ReqData defines the data sent by clients.
  • ResData defines what the callback returns.

add options

onCall accepts an optional options object as the first argument, of type CallableOptions, which extends GlobalOptions.

const options: CallableOptions = {
    concurrency: 1,
    minInstances: 1,
    maxInstances: 1,
    region: "europe-west1",
}

concurrency sets how many requests a single instance processes in parallel. By default, an instance processes multiple requests in parallel. We set it to 1 for sequential processing, assuming we also set maxInstances to 1.

minInstances defaults to 0. To avoid cold starts, we can set it to 1, at a higher cost, since an instance is kept warm.

We can limit maxInstances to 1.

Streaming version

Streaming the response means sending it in small chunks with sendChunk().

The third type argument (StreamData) defines what kind of chunk we stream. We usually stream string chunks.

The request exposes acceptsStreaming, which we read to check if the client supports streaming. When it does, the callback has access to an extra response argument, on which we call sendChunk().

onCall<T, U, V>(options, callback) // streaming Callable

onCall<ReqData, Promise<ResData>, StreamData>(async (request, response) => {
    if (request.acceptsStreaming) {
        response?.sendChunk("abc") // StreamData
        response?.sendChunk("def")
    } else return { message: ".." } // fallback for non-streaming clients
})

Patterns

halt and send an error immediately

We throw an HttpsError with an error code from a predefined list, such as unauthenticated, permission-denied, invalid-argument or not-found. Uncaught errors of any other type surface to the client as a generic internal error.

throw new HttpsError("unauthenticated", "unauthenticated")

logger

import { logger } from "firebase-functions"

logger.debug("")
logger.info("")
logger.warn("")
logger.error("")

Callable v1 (deprecated)

define the function

functions.https.onCall(async (data, context) => {
    const auth = context.auth
    const message = data.message
    return { message: ".." }
})

the context object

The context object provides the authentication details, if any, such as the email, and the request metadata, such as the IP address or the raw HTTP request. It is of type CallableContext.

check authentication

if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "you must be authenticated")
}

Invoke Callable functions

We get a reference to the callable function, and call it like a regular function.

get a functions helper: set the firebase project and the region

Since a client can interact with Cloud Functions from separate Firebase projects, we specify the project we target. We do so indirectly, by providing the app helper, which already identifies the project.

Since a cloud function can live across regions as separate regional instances, we specify the region we target. We use one of the regional identifiers that we set in the Callable options. If omitted, the client SDK targets us-central1, which errors if no instance runs there.

const functions = getFunctions(app, "europe-west1")

get a handle over the Callable function

We also provide the type arguments:

const requestPokemonCF = httpsCallable<ReqData, ResData>(functions, "requestPokemon")

invoke and handle the result

The payload, if any, is of type ReqData. The returned value is of type HttpsCallableResult<ResData>. We read the data property:

const result = await requestPokemonCF({ number: 151 })
result.data // ResData

HTTP functions

overview

Establish a bare-bones REST-API endpoint, called an HTTP function. We expose a regular REST API endpoint.

We respond with JSON, HTML, or plain text:

export const sayHello = onRequest((req, res) => {
    res.send("Hello from Firebase!")
})

add options

const options = {
    region: "europe-west1",
    cors: true,
}
export const sayHello = onRequest(options, (req, res) => {})

ExpressJS concepts and syntax

The req and res objects have the shape of Express.js req and res objects. We can add middleware.

call the endpoint: standard HTTP request (not firebase specific)

We read the function's URL at deploy time.

We consume endpoints like regular REST API endpoints. Their URL looks like this:

https://requestPlanet-x82jak2-ew.a.run.app

Run functions on Auth events

Register functions that listen and react to Firebase Auth events.

Blocking functions

run a function before the user is added to Firebase Auth

The function is blocking: we perform validation and, if applicable, throw an error to deny the registration. Firebase Auth aborts user creation on throw. The Auth client SDK receives the error and can display it to the user:

export const onRegisterBlocking = beforeUserCreated(options, async (event) => {
    const user = event.data // AuthUserRecord === UserRecord
    // user.uid
    // user.email
    if (user?.email?.includes("@hotmail.com")) {
        throw new HttpsError("invalid-argument", "don't use hotmail")
    }
    // create the user in the database first, then return
    await createDefaultDataForUser(user)
    return
})

Non-blocking functions

The non-blocking functions run after a user has been created (or deleted) by Firebase Auth.

As of writing, there is no v2 version of the non-blocking functions.

export const f = auth.user().onCreate(async (user) => {})
export const g = auth.user().onDelete(async (user) => {})

example: add the user to the Firestore database

We read the auth user's uid and create a user document with it:

export const onRegisterNonBlocking = region("europe-west1")
    .auth.user()
    .onCreate(async (user) => {
        const { uid, email } = user
        await db.collection("users").doc(uid).set({ uid, email })
    })

example: delete the user from the Firestore database

export const onDeleteAccount = region("europe-west1")
    .auth.user()
    .onDelete(async function (user) {
        const { uid } = user
        await db.doc("users/" + uid).delete()
    })

on Firestore and Storage events

on Firestore events

Run Cloud functions on database events. They are non-blocking: they run after writes. We use the term sanitization instead of validation, since they don't prevent writes.

sanitize data post-write

v2

export const onUserWritten = onDocumentWritten("users/{docId}", (event) => {
    const change = event.data // undefined when the event carries no data
    const docId = event.params.docId

    const before = change?.before.data()
    const after = change?.after.data()
})

on Storage events

sanitize data post-upload

The user uploads a file to Firebase Storage; we sanitize the data post-upload. For example:

exports.generateThumbnail = functions.storage.object().onFinalize(async (file) => {
    const fileBucket = file.bucket
    const filePath = file.name
    const contentType = file.contentType
    const metageneration = file.metageneration
    // Number of times metadata has been generated. New objects have a value of 1.
})

A canonical use case: create a thumbnail for an uploaded image.

JS Dates and Callable Functions

ISO strings are the better choice

When interacting with Callable Functions, it's best to represent dates as ISO strings. It is simple to reason about: the value and the type stay consistent on the client and on the server.

If we were to work with Date fields or Firestore Timestamp fields, the values are not consistent between sending to the server and receiving from the server.

In this article, we explain what happens when we send Date and Timestamp objects to Callable Functions or when we receive them from Callable Functions. Before being sent, both are serialized to JSON.

sending to Callable Functions

Timestamp is a Firestore specific type and doesn't get a special treatment: it serializes to an object with seconds and nanoseconds (through toJSON()).

timestamp: { seconds: 1696751687, nanoseconds: 527000000 },

As for fields of type Date, they serialize to an ISO string (through toJSON()):

date: "2023-10-08T07:54:47.527Z"

We could technically instantiate a Timestamp or a Date:

new Timestamp(timestamp.seconds, timestamp.nanoseconds)
new Date(date)

sending from Callable functions

If we attempt to return a Date object, it serializes to an ISO string.

If we attempt to return a Timestamp object, it serializes to the internal representation, possibly an object with _seconds and _nanoseconds. We should avoid this pattern.
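The Date round trip can be demonstrated with plain JSON serialization, which is what the Callable transport applies to Date fields:

```typescript
// A Date serializes to an ISO string through toJSON(); the string parses
// back into an equivalent Date on the other side.
const sent = { date: new Date("2023-10-08T07:54:47.527Z") }

// What the other side receives after the JSON round trip.
const wire = JSON.parse(JSON.stringify(sent))

const restored = new Date(wire.date)
```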

Environment variables

firebase secrets pattern

We provide secrets through the CLI tool:

firebase functions:secrets:set ABC_API_KEY

We then set a secret whitelist for each function, allowing it to access the given secrets:

const options: CallableOptions = {
    region: "europe-west1",
    secrets: ["ABC_API_KEY"],
}

onCall<ReqData, Promise<ResData>>(options, async (request) => {
    const abcKey = process.env.ABC_API_KEY
})

// the same pattern applies to HTTP functions
const options = { secrets: ["ABC_API_KEY"] }

onRequest(options, (req, res) => {
    const abcKey = process.env.ABC_API_KEY
})

debug secrets

We list the project's secrets:

gcloud secrets list --project <PROJECT_ID>

.env file pattern

The .env file pattern is less secure: values live in plaintext and are deployed as plain environment variables instead of going through Secret Manager. It is fine for local debugging. We set the env variables in a non-versioned .env file.

ABC_API_KEY=xxx

On deploy, the .env file is automatically detected and deployed alongside the functions. See env-variables docs.

The way to access them is the same: we read process.env within Cloud Functions:

process.env

Debug Functions locally

start the functions emulator

We run the functions on their own (serve), or along other emulated services.

npm run serve
firebase emulators:start --only functions
firebase emulators:start --import emulator-data --export-on-exit

Callable functions are designed to be called from the client SDK. We can bypass this requirement locally:

invoke callable functions outside the client SDK

functions:shell starts the functions emulator and starts an interactive CLI shell from which we invoke callable functions with a payload.

firebase functions:shell
npm run shell # alternative

We provide the mandatory data property. It holds the payload:

requestArticles({ data: { name: "Lena" } })

We can also invoke them with curl:

curl -s -H "Content-Type: application/json" \
  -d '{ "data": { } }' \
  http://localhost:5001/imagetale/europe-west1/get_images

wire the client to the emulator

We redirect invocations towards the emulated functions, but only on localhost:

if (location.hostname === "localhost") {
    // ...
    connectFunctionsEmulator(functions, "localhost", 5001)
}

invoke emulated HTTP functions

We invoke HTTP functions with an HTTP request. The URL pattern is specific to the emulator.

http://localhost:5001/imagetale/europe-west1/get_images

Schedule execution: Cron jobs

schedule periodic code execution

To define a schedule, we set both the periodicity and the timezone. To set the periodicity, we use strings such as every day 00:00 or every 8 hours. Then we add the callback function.

export const updateRankingsCRON = onSchedule(
    {
        schedule: "every day 00:00",
        timeZone: "Europe/London",
        region: "europe-west1",
    },
    async () => {
        // ...
    }
)

The former version (v1) uses a different API:

export const updateRankingsCRON = functions.pubsub
    .schedule("every 8 hours")
    .timeZone("Europe/London")
    .onRun(async (context) => {
        // ..
    })

© Antoine Weber 2026 - All rights reserved


manage secrets

firebase functions:secrets:access ABC_API_KEY
firebase functions:secrets:set ABC_API_KEY
firebase functions:secrets:destroy ABC_API_KEY

start and config emulators

firebase emulators:start
firebase emulators:start --import emulator-data --export-on-exit

We specify which emulators to run with firebase.json. We set the port or rely on the default one if omitted. We scaffold this file with firebase init.

{
    "emulators": {
        "firestore": { "port": 8080 },
        "auth": { "port": 9099 },
        "functions": { "port": 5001 },
        "storage": { "port": 9199 },
        "ui": { "enabled": true }
    },
    "storage": { "rules": "storage.rules" },
    "firestore": {
        "rules": "firestore.rules",
        "indexes": "firestore.indexes.json"
    },
    "functions": [
        /* ... */
    ]
}

deploy security rules

The storage emulator requires storage access rules.

  • We define Storage rules in storage.rules.
  • We define Firestore rules in firestore.rules.

firebase deploy --only storage
firebase deploy --only firestore:rules

gcloud: Google Cloud CLI tool

gcloud enables some operations not available with the firebase tool, such as listing secrets of a given project or describing a Storage bucket.

We call gcloud from the Google Cloud Console's Cloud Shell (it is pre-installed), or we install it locally from an archive provided by Google.

gcloud secrets list --project <PROJECT_ID>
gcloud storage buckets describe gs://abcd.firebasestorage.app

SDKs

We interact with the backend with the help of SDKs. We use the JavaScript SDKs.

client SDKs

The client SDKs run on unprivileged clients, such as browsers. They can also run in a Node.js app that wants to act as an (unprivileged) client.

npm i firebase

admin SDK: privileged environments

The admin SDK is designed to run on secure, privileged environments.

The admin SDK authenticates itself against Google servers by using a privileged account called a service account. Service accounts are automatically created by Google, scoped to a Firebase project and have specific entitlements. The admin SDK skips user-centric authentication and is not subject to security rules (which are designed to control untrusted requests).

We primarily use the admin SDK within Cloud Functions, an environment pre-configured by Google with the appropriate service account. The admin SDK detects it and uses it.

We use the Node.js admin SDK:

npm i firebase-admin

Cloud Functions SDK

We define Cloud Functions with the (Node.js) Cloud Functions SDK.

We have the package listed as a dependency after scaffolding the Cloud Functions directory with firebase init.

"firebase-functions": "^7.0.0",

Project setup and initialization

identify the Firebase project (client SDK)

The config object stores credentials to identify the Firebase project when interacting with Google servers. These credentials are not sensitive or confidential per se since they only serve to identify the project, and they are exposed on the client.

const firebaseConfig = {
    apiKey: "....",
    authDomain: ".....firebaseapp.com",
    projectId: "....",
    storageBucket: ".....firebasestorage.app",
    messagingSenderId: "....",
    appId: "....",
}

register one or more configs

We give the config to the client SDK. It returns a helper object that we initialize other services with.

const app = initializeApp(firebaseConfig)

When working with several Firebase projects, we get a helper for each project. The first helper has a "[DEFAULT]" internal string identifier. We must provide a string identifier for each additional project we want to work with.

const app1 = initializeApp(firebaseConfig1)
const app2 = initializeApp(firebaseConfig2, "two")

When initializing the admin SDK from Cloud Functions, the environment is automatically configured: we don't have a config object at all, and we get a helper without passing any config.

const app = initializeApp()

Auth Overview

authenticate app users

The Auth client SDK authenticates users and notifies the app about Auth events. It provides several authentication flows.

auth helper and reading currentUser across the app

We keep a reference to the auth helper to read currentUser. We also pass this helper to several auth-related functions.

const auth = getAuth(app)
auth.currentUser // User | null

currentUser starts as null. Once the SDK has finished loading, and provided the user is logged in, currentUser switches to a User instance.

A User instance holds the user's unique identifier (uid). Other properties may be empty:

currentUser.uid
currentUser.email
currentUser.phoneNumber
currentUser.displayName
currentUser.isAnonymous

react to authentication events

We register a callback on onAuthStateChanged, which Firebase runs on auth events. Firebase gives us a user object (of type User | null).

onAuthStateChanged(auth, (user) => {
    if (user) {
        // user.uid
    }
})

Auth events:

  • the auth SDK has finished loading and no user is authenticated

  • the user has registered (sign up)

  • the user has logged in (sign in)

  • the user has logged out (sign out)

Login occurs in three specific scenarios:

  • the user fills the standard login form or logs in through an identity provider (hard-login)
  • the user is recognized by the SDK and is logged in automatically (credentials stored in browser)
  • (canonically a registration) the user is automatically logged-in after a successful sign-up. Note: a single authentication event occurs.

React patterns

We make the authentication status part of the React state. For example, we work with a isSignedIn variable. We make the display of the authenticated area conditional on isSignedIn being true.

On page load, the Auth SDK is still loading: if we initialize isSignedIn to false, it may not reflect the Auth reality, and may instantly switch to true once the SDK has loaded, causing a UI flicker.

It's best to wait for the SDK to load before making any use of isSignedIn. As such, we track the loading state in a one-off state variable, which becomes true on the first authentication event. Only then do we read isSignedIn.

const [hasLoaded, setHasLoaded] = useState(false)
const [isSignedIn, setIsSignedIn] = useState(false)

useEffect(() => {
    const unsub = onAuthStateChanged(auth, (user) => {
        setHasLoaded(true)
        setIsSignedIn(Boolean(user))
    })
    return unsub
}, []) // subscribe once, unsubscribe on unmount.

if (!hasLoaded) return null
if (!isSignedIn) return <Lobby />
return <Ingame />

sign out

Sign-out is consistent across all authentication flows:

signOut(auth)

Email-Password accounts

A provider that relies on collecting the user's email and password.

registration and hard-login

register:

createUserWithEmailAndPassword(auth, email, password).then((credential) => {
    credential.user // User
})

hard login:

signInWithEmailAndPassword(auth, email, password).then((credential) => {
    credential.user // User
})

send a password reset email

We ask Firebase to send a password-reset email to the provided email. We can customize the email content through the Firebase console:

sendPasswordResetEmail(auth, email)

email account's providerData (implementation detail)

Note: password is the providerId value for the email-password provider.

{
    "providerData": [
        {
            "providerId": "password",
            "uid": "user@example.com",
            "email": "user@example.com",
            "displayName": null,
            "phoneNumber": null,
            "photoURL": null
        }
    ]
}

Identity Providers

We allow users to authenticate with an external provider account, such as a Google account or an Apple account.

select one or several providers

Note: We enable providers in the Firebase console.

const gProvider = new GoogleAuthProvider() // Google Provider

authentication flows

Possible flows:

  • the user authenticates through a popup window.
  • the user authenticates through a redirect.

Flows handle both sign-in and sign-up: we describe a flow with a generic control label:

  • "Authenticate with Foo"
  • "Continue with Foo"

Both flows trigger an authentication event on success. They return a credential (UserCredential) which embeds the user object:

const credential = await signInWithPopup(auth, gProvider)
credential.user // User

Note: We can detect whether the user is new through a helper method:

const userInfo = getAdditionalUserInfo(credential)
if (userInfo?.isNewUser) {
}

popup flow

The popup flow may fail if the browser doesn't allow popups.

const credential = await signInWithPopup(auth, gProvider)

redirect flow

The redirect flow relies on navigating to another page and navigating back.

It requires extra work unless the website is hosted on Firebase Hosting.

Anonymous account

Register an account with no personal information from the user.

signInAnonymously(auth)

The generated credentials are stored in the browser: the user cannot access the account from another device, and cannot recover the account if credentials are lost.

The creation of an anonymous account is partially supported by Auth-triggered Cloud Functions:

  • it triggers the v1's user().onCreate() cloud function.
  • it doesn't trigger the blocking beforeUserCreated() cloud function (as of now).

check if the account is anonymous

On the client, we check isAnonymous:

auth.currentUser?.isAnonymous // true for anonymous accounts

In auth-triggered Cloud Functions, we read providerData (from the UserRecord).

export const onRegisterNonBlocking = auth.user().onCreate(async (userRecord) => {
    userRecord.providerData.length === 0 // true for anonymous accounts
})

convert to a non-anonymous account

We link to another provider. Since the user already exists (currentUser), we provide it to the link function.

Link to an email credential, after collecting the email address and password:

const emailCred = EmailAuthProvider.credential(email, password)
await linkWithCredential(auth.currentUser, emailCred)

Link to an identity provider, with a popup:

const gProvider = new GoogleAuthProvider()
const result = await linkWithPopup(auth.currentUser, gProvider)

Manage users

We manage users with the Auth Admin-SDK:

import { getAuth } from "firebase-admin/auth"
const auth = getAuth()

list users

listUsers() fetches at most 1000 users at once. If we have more users, we use pagination.

const result = await auth.listUsers() // implied 1000 max
const users = result.users

users.forEach((user) => {
    user // UserRecord

    user.uid
    user.email

    // HTTP-date string (RFC 1123)
    user.metadata.creationTime // "Tue, 13 Jun 2023 17:00:00 GMT"
    user.metadata.lastSignInTime // "Wed, 14 Jun 2023 17:00:00 GMT"
})
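Fetching more than 1000 users means following pageToken across calls. Below is a sketch of that loop; fetchPage is a hypothetical in-memory stand-in that mirrors the { users, pageToken } shape returned by the Admin SDK's auth.listUsers(maxResults, pageToken):

```typescript
// Shape mirroring the Admin SDK's ListUsersResult.
type ListResult = { users: { uid: string }[]; pageToken?: string }

// Hypothetical in-memory user set, standing in for the Auth backend.
const allUsers = Array.from({ length: 2500 }, (_, i) => ({ uid: `user-${i}` }))

// Stand-in for auth.listUsers(maxResults, pageToken).
async function fetchPage(maxResults: number, pageToken?: string): Promise<ListResult> {
    const start = pageToken ? Number(pageToken) : 0
    const users = allUsers.slice(start, start + maxResults)
    const next = start + maxResults
    return { users, pageToken: next < allUsers.length ? String(next) : undefined }
}

// The pagination loop: keep requesting pages until no pageToken comes back.
async function listAllUsers(): Promise<{ uid: string }[]> {
    const users: { uid: string }[] = []
    let pageToken: string | undefined
    do {
        const result = await fetchPage(1000, pageToken)
        users.push(...result.users)
        pageToken = result.pageToken
    } while (pageToken)
    return users
}
```

With the Admin SDK, auth.listUsers(1000, pageToken) takes the place of fetchPage; the loop itself is unchanged.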

Firestore

conceptual

Firestore is a database made of schema-less collections and documents. It is a NoSQL database that is most similar to MongoDB.

A collection is a set of documents.

A document is a set of fields holding primitive data types (number, string, timestamps...). A document has up to 20k fields and stores up to 1 MiB of data.

A reference serves to identify a collection or a document in the database. It doesn't guarantee the collection or document's existence: it's merely a path that may point to nothing.

firestore reference

firebase-admin is a wrapper around @google-cloud/firestore. It has the same syntax and capabilities.

import paths

"firebase/firestore" // client SDK
"firebase/firestore/lite" // client SDK

"firebase-admin/firestore" // admin SDK

helper object

We initialize a db object, for use in Firestore-related functions.

// const app = initializeApp()
const db = getFirestore(app)

Collection

Collection Reference

use the collection reference

We use the collection reference to:

  • fetch all documents (it acts as a query): getDocs(colRef)

  • build a query targeting the collection: query(colRef, filters..)

  • build a document reference (random-ID): doc(colRef), or one that refers to a specific document: doc(colRef, docId)

  • add a document to the collection (random ID, generated on the fly): addDoc(colRef, data)

build a collection reference

We use a path to identify the collection (uniquely). Root collections have the simplest path, such as "users" (no starting slash). Sub-collection paths are built from several components.

We set the path as:

  • a single string, with slash separators.

  • a sequence of strings, with no slash separators.

const colRef = collection(db, "users")
const colRef = collection(db, `users/${uid}/custom_list`)
const colRef = collection(db, "users", uid, "custom_list")
const colRef = db.collection(`users/${uid}/custom_list`) // admin SDK

TypeScript: set the document's type at the collection level.

Collections are schema-less: they don't define the shape of their documents.

When receiving document data from the database, the client SDK checks the actual data and instantiates documents with it. The instantiated documents are of any shape and may differ from one another.

The instantiated documents are typed as DocumentData, which is a loose type that doesn't provide information about the content.

We provide a more precise type at the collection reference level. We do it through a type assertion:

const colRef = collection(db, "players") as CollectionReference<Player, Player>

Instantiated documents are now of type Player.

Converter

The SDK supports having two document shapes on the client:

CollectionReference<AppModelType, DbModelType>

DbModel is the representation of the received data, aka the object that the SDK instantiates as a direct translation of the received data, with no transformation. It is DocumentData by default.

We can add a converter to transform it into a different shape for use in the app.

AppModel represents the object as it is after the converter's transformation. It also defaults to DocumentData. We set it to whatever type the converter converts to.

Before sending to Firestore, the converter transforms AppModel back to DbModel.

Transformation examples:

  • We transform the DbModel's Timestamp field to an AppModel Date field.
  • We add properties to AppModel.

implement the converter

We transform the documents at the app boundaries:

  • upon receiving from Firestore (fromFirestore())
  • upon sending to Firestore (toFirestore())

We define the functions and add them to the converter.

fromFirestore() takes the snapshot as instantiated:

fromFirestore(snapshot: QueryDocumentSnapshot<FirestoreWorkout>): Workout {
    // to client shape
    const firestoreWorkout = snapshot.data()
    return { ...firestoreWorkout, date: firestoreWorkout.date.toDate() }
}

toFirestore() takes the object in its app-side shape.

toFirestore(workout: Workout) {
    // to database shape
    return { ...workout, date: Timestamp.fromDate(workout.date) }
}

We gather the transforms in the converter (FirestoreDataConverter). While the types may be inferred from the transforms, we may still annotate them for safety.

// FirestoreDataConverter<AppModel, DbModel>
const myConverter: FirestoreDataConverter<Workout, FirestoreWorkout> = {
    toFirestore() {},
    fromFirestore() {},
}

We attach it to the collection reference to let it type its documents.

const colRef = collection(db, "players").withConverter(myConverter)
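The Date/Timestamp round trip performed by the converter can be exercised without the SDK. In this sketch, the Timestamp class is a minimal hypothetical stand-in (only fromDate()/toDate()), and Workout/FirestoreWorkout match the earlier example:

```typescript
// Minimal stand-in for Firestore's Timestamp (only what the converter needs).
class Timestamp {
    constructor(readonly seconds: number, readonly nanoseconds: number) {}
    static fromDate(date: Date): Timestamp {
        const ms = date.getTime()
        return new Timestamp(Math.floor(ms / 1000), (ms % 1000) * 1e6)
    }
    toDate(): Date {
        return new Date(this.seconds * 1000 + this.nanoseconds / 1e6)
    }
}

type FirestoreWorkout = { name: string; date: Timestamp } // DbModel
type Workout = { name: string; date: Date } // AppModel

// The two transforms, as plain functions.
const toFirestore = (workout: Workout): FirestoreWorkout => ({
    ...workout,
    date: Timestamp.fromDate(workout.date),
})
const fromFirestore = (stored: FirestoreWorkout): Workout => ({
    ...stored,
    date: stored.date.toDate(),
})

// Round trip: AppModel -> DbModel -> AppModel.
const workout: Workout = { name: "Leg day", date: new Date("2026-01-04T10:00:00Z") }
const restored = fromFirestore(toFirestore(workout))
```

The real converter does exactly this, with snapshot.data() providing the stored object.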

Document

Document reference

The document reference identifies a document within the database, and embeds meta information:

docRef.id // "Nk....WQ"
docRef.path // "users/Nk....WQ"
docRef.parent // colRef

use the document reference

We use the reference for most CRUD operations:

  • read the document: getDoc

  • update an existing document (it errors if the document doesn't exist): updateDoc

  • delete the document: deleteDoc

  • create the document, or override an existing one (upsert): setDoc

build a document reference

The document's path identifies it uniquely. We set the path as a single string or build it from string components.

const docRef = doc(db, "users", id)
const docRef = doc(db, "users/Nk....WQ")

const docRef = collectionRef.doc("NkJz11WQ") // admin sdk

Alternatively, we provide the collectionRef and the document ID. If we omit the ID, the SDK generates a random one.

const docRef = doc(collectionRef, id)
const docRef = doc(collectionRef) // randomized ID

read document at reference (get)

The get operation succeeds even if no document exists: checking for a document's existence is a valid read.

The function returns a Document snapshot, which may be empty:

getDoc(docRef) // DocumentSnapshot
docRef.get() // DocumentSnapshot

Document snapshot

The Document snapshot is a wrapper that doesn't guarantee the document's existence. It exposes the document (or its absence) via a getter. Unless we provide a more specific type, the document's type is DocumentData.

Note: data() is a function because it accepts some configuration.

docSnapshot.exists()
docSnapshot.data() // DocumentData | undefined

It also exposes helpers and metadata.

docSnapshot.id // NkJ...7f
docSnapshot.ref // DocumentReference
docSnapshot.metadata // SnapshotMetadata

Query a specific field

docSnapshot.get("address.zipCode") // low use

real-time listener

Set up a real-time listener on a document reference:

const unsub = onSnapshot(docRef, (docSnapshot) => {
    docSnapshot.data() // DocumentData | undefined
})

Query

overview

A query matches documents based on a set of criteria, instead of pre-defined document references.

the result of a query: a query snapshot

The query snapshot hosts the list of document snapshots (docs). The list is empty when no match occurred.

The document snapshots are of type QueryDocumentSnapshot (not DocumentSnapshot) but they have the same API surface. They are guaranteed to have an underlying document at snapshot.data() (this is the difference).

querySnapshot.docs // list of document snapshots
querySnapshot.empty
const cats = querySnapshot.docs.map((snap) => snap.data())

a collection reference is a query

A collection ref is technically a query and is used to target all documents in a read (get):

getDocs(q)
getDocs(colRef)

q.get()
colRef.get()

build a query

We add value-based filters, set the order and limit the count:

const q = query(colRef, where(..), where(..), orderBy(..), limit(..))
const q = colRef.where(..).orderBy(..).limit(..) // admin SDK

where filter: look for documents with a given value

We filter documents based on a value we want to find in a property. We request an exact value or one within a range. Depending on the data, we expect a single match at most or several.

Note: documents that do not possess the property are filtered out.

For example, we look for the document whose id is of value user.id.

where(propertyName, operator, value)
where("id", "==", user.id)

Set the requirement for the value: exact match, being different, being smaller or larger, exact match with at least one value, or different from all values.

==
!=

<
<=
>
>=

"in" // the property is equal to either A, B or C
"not-in" // the property is different from A, B and C.

We can also ask the value to be included or excluded from the array if the property is an array.

"array-contains" // the array contains this value
"array-contains-any" // the array contains A, B or C..

order documents based on one field

We order documents based on the value of a given field. By default, it sorts documents so that the value is ascending. It's best to set the order explicitly rather than relying on the default ascending order.

orderBy(propertyName, orderDirection)
orderBy("postCount", "asc")
orderBy("postCount", "desc")

We can start from a given value, e.g. documents that have at least 10 posts (or more than 10 posts).

startAt(10)
startAfter(10)

pagination: cap the read, read the next page

Get at most n documents:

limit(20)

To get the next page, we provide a cutoff document (snapshot), stored from the current batch: we receive document snapshots starting beyond it:

query(colRef, startAfter(docSnapshot), limit(20))

While we can include the cutoff document in the next batch, it is mostly for other patterns:

startAt(docSnapshot)
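The cursor semantics can be sketched over a pre-sorted in-memory array: startAfter resumes strictly beyond the cursor, startAt includes it. Again a semantics sketch, not the SDK:

```typescript
// Sketch of cursor pagination semantics over a pre-sorted array.
function page<T>(sorted: T[], limit: number, cursor?: T, inclusive = false): T[] {
    let start = 0
    if (cursor !== undefined) {
        const i = sorted.indexOf(cursor)
        start = inclusive ? i : i + 1 // startAt vs startAfter
    }
    return sorted.slice(start, start + limit)
}

const posts = ["a", "b", "c", "d", "e"]
const page1 = page(posts, 2) // first page
const cursor = page1[page1.length - 1] // keep the last item as the cursor
const page2 = page(posts, 2, cursor) // startAfter: resumes beyond "b"
```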

run the query (get)

const qs = await getDocs(q)
const qs = await q.get()

real-time listener

Set up a real-time listener on the query: we receive a query snapshot:

const unsub = onSnapshot(q, (qs) => {
    const documents = qs.docs.map((docSnapshot) => docSnapshot.data())
    setMessages(documents)
})

Create and update documents

strict document creation

Strictly create a document with a controlled ID. The operation aborts if a document already exists at the reference. (admin SDK only)

docRef.create(data)

The client SDK wants to be offline-friendly. As such, it doesn't support strict document creation with a controlled ID, because that requires a server roundtrip to green-light it. It does support random-ID creation, because the document won't exist by design:

addDoc(collectionRef, data)
db.collection("message").add(data)

To get a controlled, strict document creation, we must use a two-step transaction where we first read, then write, and throw if a document exists.

upsert

An upsert works whether or not the document exists, and always yields the same result (idempotent). It is destructive, aka it overrides any existing document: the end result is that of a creation:

setDoc(docRef, data)
docRef.set(data)

partial update

We assume the document already exists: we use the update pattern or the set merge pattern.

The update pattern is a strict update: it correctly fails if the document doesn't exist.

Both update and set merge expect a change object.

For update, the change fields replace the existing ones as-provided, the other fields are unchanged.

If we want to mutate a single property within an object field (aka mutate a sub-field), we target the sub-field directly, with a dot notation field:

const change = { displayName: "Johnny Appleseed" }
updateDoc(docRef, change)
docRef.update(change)

// sub-field
const change = { "address.city": "Lyon" }
updateDoc(docRef, change)

Note: We type the change as a Partial or a Pick of the document. If TypeScript complains about the dot notation, we use a separate overload of updateDoc():

updateDoc(docRef, new FieldPath("address", "city"), "Lyon")

partial update with set

set comes with a merge option that changes its meaning: we are now providing a change object. The risk is forgetting the merge option and overriding the whole document with the change object.

We provide the sub-fields we want to change. The other ones are preserved (deep merge):

const change = { address: { city: "Lyon" } } // it preserves the country field

setDoc(docRef, change, { merge: true })
docRef.set(change, { merge: true })
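The difference between update's dot-notation change and set's deep merge can be sketched on plain objects (a semantics illustration, not the SDK's implementation):

```typescript
// Sketch: update with a dot-notation key vs set with { merge: true }.
type Obj = Record<string, any>

// update: a dot-notation key targets the sub-field; a plain key replaces the field wholesale.
function applyUpdate(doc: Obj, change: Obj): Obj {
    const out = structuredClone(doc)
    for (const [key, value] of Object.entries(change)) {
        const path = key.split(".")
        let target = out
        for (const part of path.slice(0, -1)) target = target[part]
        target[path[path.length - 1]] = value
    }
    return out
}

// set merge: nested change objects are deep-merged into the document.
function applySetMerge(doc: Obj, change: Obj): Obj {
    const out = structuredClone(doc)
    for (const [key, value] of Object.entries(change)) {
        out[key] =
            value && typeof value === "object" && !Array.isArray(value)
                ? applySetMerge(out[key] ?? {}, value)
                : value
    }
    return out
}

const user = { address: { city: "Paris", country: "France" } }
// Both preserve the country field:
const a = applyUpdate(user, { "address.city": "Lyon" })
const b = applySetMerge(user, { address: { city: "Lyon" } })
// A plain update key replaces the whole object field (country is lost):
const c = applyUpdate(user, { address: { city: "Lyon" } })
```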

blind increment

We ask the server to increment the field by n, which may be negative for decrement. We skip a preemptive read since we don't care about the absolute value:

// client SDK
const partialUserDoc = {
    activityScore: increment(1),
}
updateDoc(docRef, partialUserDoc)

// admin SDK
docRef.update({
    count: FieldValue.increment(1),
})

delete field

We ask the server to delete a field. This shortcuts the need to fetch the document first and then store it back without the given field:

updateDoc(docRef, {
    fleet: deleteField(),
})

docRef.update({
    fleet: FieldValue.delete(),
})

server timestamp field

Ask the server to generate a Firestore timestamp value.

updateDoc(docRef, {
    updatedAt: serverTimestamp(),
})

docRef.update({
    updatedAt: FieldValue.serverTimestamp(),
})

delete document

docRef.delete()
deleteDoc(docRef)

Batch writes

Instead of performing multiple individual writes, we gather them in a batch object and ask Firebase to commit all the writes at once. A single network request is sent.

It is atomic: if one write fails, the others fail as well. This prevents a broken state where only some documents are updated.

batch update from the client

Collect up to 500 writes in a batch object, then execute the batch with commit():

const batch = writeBatch(db)

batch.update(docRef1, { timezone: "Europe/London" })
batch.update(docRef2, { timezone: "Europe/London" })

await batch.commit()

batch update from the Admin SDK

In the admin SDK, we get a batch helper differently. The remaining code is the same.

const batch = db.batch()

// same code

other batch operations

batch.set(docRef, data)
batch.set(docRef, data, { merge: true })
batch.update(docRef, data)
batch.delete(docRef)
batch.create(docRef, data) // Admin SDK
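Since a batch caps at 500 writes, larger jobs must be split across several batches (each batch then remains atomic only within itself, not across the whole job). Below is a sketch of the chunking step; the commented usage assumes the client SDK's writeBatch and a hypothetical docRefs list:

```typescript
// Split an arbitrary list of writes into groups of at most 500 operations.
const MAX_BATCH_SIZE = 500

function chunk<T>(items: T[], size: number): T[][] {
    const chunks: T[][] = []
    for (let i = 0; i < items.length; i += size) {
        chunks.push(items.slice(i, i + size))
    }
    return chunks
}

// Hypothetical usage with the client SDK (not run here):
// for (const group of chunk(docRefs, MAX_BATCH_SIZE)) {
//     const batch = writeBatch(db)
//     group.forEach((docRef) => batch.update(docRef, { migrated: true }))
//     await batch.commit() // one network request per group of 500
// }

const groups = chunk(Array.from({ length: 1200 }, (_, i) => i), MAX_BATCH_SIZE)
```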

Transaction

Read and write atomically with runTransaction.

The transaction guarantees that by the time we commit the write, the data on which we decided to act is still the same in the database (unchanged).

Outside a transaction, the data we read can change during the time window that separates the read hitting the database and the write hitting the database, and there is no check that prevents the write if the read data has changed.

Note: the Admin SDK locks the document during the read to write time-window, so there won't be retries. The client SDK doesn't lock the document. Instead, if data changes during the time window, a new read is done to account for the new value.

For example, if credits is positive and sufficient, we accept the purchase, but by the time we are about to commit the purchase, we want credits not to have changed since the read, otherwise we start the check process over again. This is the transaction pattern.

runTransaction

runTransaction expects a callback. transaction is a helper that holds the read and write methods (get, update, set).

Note that we await reads, but don't await writes, due to how runTransaction is implemented.

In case of failed preconditions, we abort the transaction with a throw.

Client SDK:

await runTransaction(db, async (transaction) => {
    // read
    const snapshot = await transaction.get(docRef)

    // check condition
    const currentCount = snapshot.data().count
    if (currentCount >= 10) throw Error("Sorry, event is full!") // Abort

    // proceed
    transaction.update(docRef, { count: currentCount + 1 })
})

Admin SDK:

await db.runTransaction(async (transaction) => {
    // identical API
})

Timestamp value type (advanced)

Storing dates as ISO strings is simpler to reason about and more portable. That said, the Firestore database comes with a native value type for storing dates, called timestamp, and this is the pattern we describe in this article. The Firestore SDK comes with a Timestamp type that represents a timestamp field.

storing timestamps

As we attempt to store data, the SDK detects Date and Timestamp fields and assumes we want to store them as timestamps.

const user = {
    createdAt: new Date(),
    createdAt_: Timestamp.now(),
}

When preparing data to be transported through an HTTP request, the SDK serializes Date and Timestamp objects to objects with a single timestampValue property.

{
  "createdAt": { "timestampValue": "2025-10-07T18:47:13.279000000Z" },
  "createdAt_": { "timestampValue": "2025-10-07T18:47:13.279000000Z" }
}

The database detects this pattern and stores those fields as timestamps.
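The timestampValue string can be reproduced from a Date: it is the ISO instant with the millisecond fraction widened to nanosecond precision. A sketch, assuming millisecond-precision input:

```typescript
// Sketch: serialize a Date to Firestore's wire format for timestamps.
function toTimestampValue(date: Date): { timestampValue: string } {
    // Date has millisecond precision; widen ".SSSZ" to ".SSS000000Z".
    const iso = date.toISOString() // e.g. "2025-10-07T18:47:13.279Z"
    return { timestampValue: iso.replace("Z", "000000Z") }
}

const wire = toTimestampValue(new Date("2025-10-07T18:47:13.279Z"))
// { timestampValue: "2025-10-07T18:47:13.279000000Z" }
```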

receiving timestamps

Timestamp is the type designed to represent database timestamps. As we receive timestamp fields from the database, the Firestore SDK instantiates them as Timestamp objects.

Firestore Security rules

We define the security rules in the Firebase console or in a firestore.rules file. Firebase doesn't bill reads and writes denied by security rules.

rules version

rules_version = '2';

firestore scope

We start by scoping the rules to cloud.firestore

service cloud.firestore {
    // ...
}

database scope

We scope the rules to the current database. This is boilerplate code: we don't use the database wildcard.

match /databases/{database}/documents {
    // ...
}

set rules for a given collection

We target a collection. The document ID wildcard variable holds the requested document ID. We can, for example, compare the user document's ID with the authentication data.

match /users/{user_id} {
    // ...
}

operations and condition

allow operation, operation: if condition;

operations

read
create
update
delete

authentication, user ID

If the user is not authenticated, request.auth is null. We may filter out unauthenticated users:

allow read: if request.auth != null;

The user's authentication uid (if logged-in) is available as request.auth.uid:

request.auth.uid

Note: if auth is null, trying to read uid triggers a failsafe mechanism that denies the request.

green-light specific documents

We green-light the document if its ID matches a criterion:

    match /players/{player_id} {
         allow read: if request.auth.uid == player_id;
    }

We green-light the document if one of its fields matches a criterion. resource.data represents the requested document. For example, we check the document's owner property against auth.uid.

    match /planets/{planet_id} {
         allow read: if request.auth.uid == resource.data.owner.id;
    }

If the document is missing the field, the request is denied.

get authorization information in a separate document

We read a different document with get(). It is a billed read.

get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank

This unlocks a pattern where we read some authorization data in a different document, such as the user document, which would store the user's entitlements or ranks. This may not be a good architecture.

For example, to require a specific rank:

    match /characters/{character_id} {
         allow update: if get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank == "Game Master";
    }

For example, we enforce that the requested character's zone is the same as the player's character's zone

match /overworld_characters/{overworld_character} {
     allow read: if get(/databases/$(database)/documents/characters/$(request.auth.uid)).data.zone == resource.data.zone;
}

payload validation

request.resource.data is the request's payload. We validate critical fields such as the document's owner.

request.resource.data.age > 0

  // A) user creates a post that mentions himself as uid: check the payload's uid field.
  allow create: if request.auth.uid == request.resource.data.uid;

  // B) user modifies a post that mentions himself as uid:
  // check both the existing document and the payload.
  allow update, delete: if
    request.auth.uid == resource.data.uid
    && request.auth.uid == request.resource.data.uid;

Note: We can instead forbid writes coming from the client and perform validation in a Cloud Function with TypeScript.

Storage

reference

file terminology, file patterns

Firebase Storage is a wrapper around Google's Cloud Storage, a cloud storage service. It is technically an object storage service because it stores immutable objects in a flat bucket, instead of files in a hierarchical filesystem.

Firebase Storage reintroduces the concept of files, folders and file hierarchy, primarily through the convention of using paths as object names, such as public/abc.png. The SDKs and docs use the term file instead of object.

project's default bucket (implementation detail)

A Firebase project is given a default bucket, with a given URI. The bucket's URI distinguishes it from other buckets. It is made of two components: a gs:// prefix and a domain name. The default bucket's domain uses the project's name, which makes it globally unique. If we add another bucket, we must pick a globally unique name ourselves:

"gs://<PROJECT-ID>.firebasestorage.app"
"gs://<PROJECT-ID>.appspot.com" // old default bucket URIs

"gs://<GLOBALLY-UNIQUE-ID>" // non-default bucket URI

The URIs are not HTTP URLs: no data is served if we force HTTP URLs out of them.

storage helper

We get a storage helper.

  • Firebase exports a storage variable, so we use another name for the helper.
  • The client SDK uses the default bucket unless we specify another one in the initializer:

const storageService = getStorage(app)
const storageService = getStorage(app, "gs://...")

File references and metadata

file path

A file is uniquely identified by its path in the bucket. It includes the file extension.

file reference

We use references to interact with files. We build them with file paths:

const fileRef = ref(storage, "tts/2F14Izjv.mp3")
const fileRef = bucket.file("tts/2F14Izjv.mp3") // admin SDK

The file reference does not guarantee the file's existence. Its properties are of limited use:

ref.bucket // "abc.firebasestorage.app"
ref.fullPath // "tts/abc.mp3"
ref.name // "abc.mp3"

// computed references
ref.parent // ref(storage, "tts")
ref.root // ref(storage, "/")

file metadata

We fetch an existing file's metadata:

const metadata = await getMetadata(fileRef) // client SDK

It is a FullMetadata instance:

// repeat from fileRef
metadata.bucket
metadata.fullPath
metadata.name

metadata.size // 1048576 (bytes)
metadata.contentType // "audio/mpeg" (MIME type)
metadata.timeCreated // "2026-01-04T12:34:56.789Z"

metadata.ref // file reference

List files and folders

folders and prefix terminology

The API describes folders as prefixes, but the docs also mention folders.

folder existence

A file creates nested folders through its name alone, when the name contains subpaths. For example, abc/def/hello.pdf creates two folders: abc and def. Those folders do not exist on their own: they are an artificial byproduct.

By design, those folders can't be empty, because they derive from a nested file.

use a folder reference to list its content

We build a reference to a folder to list its content. It is a shallow list: we see the top level files and folders.

The list discriminates files (items) from folders (prefixes), putting them into separate arrays. Each entry is exposed as a reference (StorageReference), regardless of whether it is a file or a folder.

Note: list() is a capped version that expects a count limit.

const folderRef = ref(storage, "uploads")

const result = await list(folderRef, { maxResults: 100 })
// const result = await listAll(folderRef)

result.items // StorageReference[]
result.prefixes // StorageReference[]
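The shallow listing can be sketched from a flat set of object names: a direct child of the folder is an item, and a deeper path contributes its first-level folder as a prefix (a semantics sketch, not the SDK):

```typescript
// Sketch: derive a shallow listing (items vs prefixes) from flat object names.
function listShallow(objectNames: string[], folder: string) {
    const prefix = folder.endsWith("/") || folder === "" ? folder : folder + "/"
    const items: string[] = []
    const prefixes = new Set<string>()
    for (const name of objectNames) {
        if (!name.startsWith(prefix)) continue
        const rest = name.slice(prefix.length)
        const slash = rest.indexOf("/")
        if (slash === -1) items.push(name) // direct child: a file
        else prefixes.add(prefix + rest.slice(0, slash)) // nested: its first-level folder
    }
    return { items, prefixes: [...prefixes] }
}

const names = ["uploads/a.png", "uploads/docs/b.pdf", "uploads/docs/c.pdf", "tts/x.mp3"]
const listing = listShallow(names, "uploads")
// items: ["uploads/a.png"], prefixes: ["uploads/docs"]
```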

Read, download files

general considerations

  • The client SDK is subject to access-rules. Some functions allow the user, after access control, to save a bearer URL which is not subject to security rules (one-off access control).
  • Download workflows are influenced by the browser restrictions.

get a HTTP URL on the client

We request a read URL. Access control is performed when requesting the URL.

The returned URL is a bearer URL, which is not subject to access-control. We consume it outside the realm of the Storage SDK, as a regular URL.

Note: the URL remains valid unless manually revoked at the file level in the Firebase Console.

getDownloadURL(fileRef).then(url => ...)

consume a cross-origin HTTP URL on the client

The URL is cross-origin. The challenges and patterns to consume a cross-origin URL are not specific to Firebase.

The way we consume the URL determines if CORS headers are necessary.

  • The browser allows cross-origin URLs in media elements' src attribute (hot linking), with no CORS headers required.
  • The browser allows navigating to cross-origin URLs (basic browser behavior). For example, we navigate to an image in a new tab.
  • The browser doesn't allow background fetch of cross-origin resources unless explicit CORS headers are present on the server. This applies to fetch() and functions that rely on it.

Buckets do not have permissive CORS headers by default, but we can add them on demand. CORS headers whitelist one, several or all domains. We use gsutil or gcloud to whitelist our domain (see the dedicated chapter).

download a Blob with the client SDK

A blob is an opaque object that we fetch and transform to a local URL for easier saving. When using getBlob():

  • access rules are enforced
  • CORS headers are required (it uses fetch() under the hood)

We create a local (same-origin) URL out of the blob, to avoid the browser restrictions against cross-origin URLs. It restores the ability to download content through a single click, without navigating to a different URL (see below).

getBlob(fileRef).then((blob) => {
    // create a local URL and trigger download imperatively
})

add the download attribute to an anchor tag, guard against cross-origin URLs

The download attribute on anchor tags (<a href="" download>) offers one-click downloads for same-origin URLs or local URLs.

For cross-origin URLs, clicking the anchor tag triggers standard browser navigation instead: the browser navigates to the resource and shows its full URL.
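As a sketch, we can check whether an href is same-origin with the page before relying on the download attribute. The helper name is hypothetical; in a browser, the second argument would typically be location.origin:

```typescript
// Hypothetical helper: will the anchor `download` attribute be honored,
// or will the browser navigate instead? Same-origin URLs (and relative
// URLs, which resolve against the page origin) allow one-click download.
function isSameOrigin(href: string, pageOrigin: string): boolean {
    return new URL(href, pageOrigin).origin === new URL(pageOrigin).origin
}
```

A bearer URL from getDownloadURL() points at the storage bucket's domain, so this check returns false for it.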

create a local URL out of a blob (browser specific)

This example also triggers the download programmatically, and revokes the local URL for cleanup. We set download to the file name.

// 3. Create a local URL out of the blob
const objectURL = URL.createObjectURL(blob)

// 4. Use the local URL to trigger the download
const link = document.createElement("a")
link.href = objectURL
link.download = img.id + ".png"
document.body.appendChild(link)
link.click()
document.body.removeChild(link)

// 5. Clean up by revoking the local URL
URL.revokeObjectURL(objectURL)

Upload data

client SDK

upload a Blob or a File

We prepare some data in a JavaScript Blob or File object, and upload it to the reference.

const result = await uploadBytes(fileRef, file)
  • The upload is an unconditional upsert: it overwrites existing files.
  • It makes the file immediately downloadable through the SDK read functions.
  • On success, we receive an UploadResult, which wraps the bucket file's metadata and the file reference.
result.metadata // FullMetadata
result.ref

(advanced) upload and track the progress

For each tick, we receive a snapshot. We may show the upload progress.

const uploadTask = uploadBytesResumable(ref, file)

uploadTask.on(
    "state_changed",
    /* on snapshot */
    function (snapshot) {
        // snapshot.bytesTransferred
        // snapshot.totalBytes
        // snapshot.state // "paused" | "running"
    },
    function (error) {},
    function () {
        /* on completion */
        getDownloadURL(uploadTask.snapshot.ref).then(/**/)
    }
)

admin SDK

upload a Node.js Buffer and make it downloadable

We prepare some data in a Node.js Buffer, and upload it to the reference.

await fileRef.save(imageBuffer, {
    resumable: false,
    metadata: {
        contentType: `image/png`,
        cacheControl: "public, max-age=31536000, immutable",
    },
})

Note: this doesn't make the file downloadable for clients: a client-side getDownloadURL() fails, because the underlying Google Cloud Storage object is missing a Firebase-specific download token in its metadata.

To make it downloadable for clients, we use the admin SDK's getDownloadURL(). It adds a permanent download token to the underlying Google Cloud Storage object's metadata. It also returns the bearer URL (a tokenized URL that embeds this very access token, and is not subject to security rules).

We can store it in a database, return it to the client, or discard it and let the client SDK generate the URL on its own with getDownloadURL() (since the file is now downloadable).

const url = await getDownloadURL(fileRef)

We can invalidate the access token from the Firebase Console. This makes the file non-downloadable, and the bearer URL becomes invalid.

advanced: read and write the token

The token, if present, is in the File's metadata field. We should avoid setting this field manually when using save(). We use getDownloadURL instead (see full example below).

metadata: {
  firebaseStorageDownloadTokens: token
}

upload image example (admin SDK)

We upload an image and make it readable by clients. We may store the bearer URL.

// 1.0 create a file reference
const fileRef = bucket.file(`generated/${userID}/cat.png`)

// 1.1 create a Buffer object
const imageBuffer = base64ToBuffer(base64Data)

// 1.2 upload the Buffer object
await fileRef.save(imageBuffer, {
    resumable: false,
    metadata: {
        contentType: `image/png`,
        cacheControl: "public, max-age=31536000, immutable",
    },
})
//  1.3 make it readable by client SDKs (generate a token).
const url = await getDownloadURL(fileRef)

//  1.4 store the bearer URL (if applicable)
//  ...
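The base64ToBuffer helper used above is not an SDK function. A minimal sketch, which also tolerates a data-URL prefix:

```typescript
// Sketch: decode a base64 string (optionally a data URL) into a Node.js Buffer.
function base64ToBuffer(base64Data: string): Buffer {
    // strip an optional "data:<mime>;base64," prefix
    const raw = base64Data.replace(/^data:[^;]+;base64,/, "")
    return Buffer.from(raw, "base64")
}
```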

Setting the bucket CORS header

Some read operations require the client's domain to be whitelisted by a CORS header. We add authorized domains to cors.json and send it to Google through the CLI:

cors.json

[
    {
        "origin": ["https://imagetales.io", "http://localhost:5173"],
        "method": ["GET"],
        "maxAgeSeconds": 3600
    }
]

Register cors.json:

gcloud storage buckets update gs://abc.firebasestorage.app --cors-file=cors.json

(Debug) Describe the existing bucket CORS config:

gcloud storage buckets describe gs://abc.firebasestorage.app --format="default(cors_config)"

read operations that require a CORS whitelist

Browser reads relying on background fetch rather than navigating to the URL require a CORS whitelist:

  • getBlob(fileRef) to get a Blob, which uses fetch() under the hood.
  • getBytes(fileRef) to get an ArrayBuffer, which uses fetch() under the hood.
  • using fetch() manually with a bearer (tokenized) URL.

Cloud Functions

Cloud Functions are serverless functions: we run code on servers operated by Google.

As it is a secure environment, we run sensitive tasks: authenticate requests, perform server-side validation, use API keys, make sensitive writes to the database, and more.

Functions trigger on spontaneous requests, or on events happening in the Firebase ecosystem, such as the registration of new users through Firebase Auth.

react to spontaneous requests: two options

The first option is to establish a bare-bones REST API endpoint, called an HTTP function, with an Express.js-like API.

The second option is to establish a Callable function, a pattern that involves both a server SDK and a client SDK working hand in hand to provide a better developer experience, such as managed authentication.

onRequest and onCall are the two helpers that define these functions:

import { onRequest, onCall } from "firebase-functions/https"

select and deploy functions

The main file determines, through the functions it exports, which functions are deployed. The main file is the one we set in package.json, and it must be a JavaScript file:

{
    "main": "lib/index.js"
}

It is usually a barrel file that re-exports functions from their own files:

export { requestPlayer } from "./requestPlayer.js"

We deploy functions imperatively, all or a few:

firebase deploy --only functions
firebase deploy --only functions:requestPlayer
firebase deploy --only functions:requestPlayer,functions:requestPlanet

To delete a function, we remove it from the main file and run its deploy command. The CLI detects its absence and prompts us for confirmation.

define functions with TypeScript

We use a workflow that transpiles to JS since the main file must be JavaScript. The convention is to store TypeScript code in src/ and transpile towards lib/. The main file is lib/index.js.

tsconfig.json configures the transpilation, targeting the Node.js runtime:

{
    "compilerOptions": {
        "module": "NodeNext",
        "moduleResolution": "nodenext",
        "outDir": "lib",
        "esModuleInterop": true,
        "noImplicitReturns": true,
        "noUnusedLocals": true,
        "sourceMap": true,
        "strict": true,
        "target": "es2020"
    },
    "compileOnSave": true,
    "include": ["src"]
}

We make transpilation continuous with the watch flag. The emulator then detects changes in the generated JS files and updates the emulated services:

tsc -w
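The firebase init scaffold typically wires these commands into package.json scripts (the exact script names may vary by template); the document's serve and shell workflows rely on them:

```json
{
    "scripts": {
        "build": "tsc",
        "build:watch": "tsc -w",
        "serve": "npm run build && firebase emulators:start --only functions",
        "shell": "npm run build && firebase functions:shell",
        "deploy": "firebase deploy --only functions"
    }
}
```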

admin SDK

Within cloud functions, we interact with other Firebase services through the admin SDK. For example, we work with the project's Firestore database:

import { initializeApp } from "firebase-admin/app"
import { getFirestore } from "firebase-admin/firestore"

const app = initializeApp()
const db = getFirestore(app)

Define Callable functions

The code we run in a Callable function has access to the user authentication status and the request's data.

Callable functions support streaming the response: we describe the pattern in a dedicated section.

Overview and syntax

synopsis

onCall<ReqData, Promise<ResData>>(callback)
onCall<ReqData, Promise<ResData>>(options, callback)

the callback

The callback has access to the request object (CallableRequest), which exposes auth and data.

We define the callback async so it returns a promise. The connection is kept open until the promise settles.

onCall<ReqData, Promise<ResData>>(async (request) => {
    request.auth // AuthData | undefined
    request.auth?.uid

    request.data // ReqData

    return { message: ".." } // ResData
})
  • auth is undefined when the request is unauthenticated. It has uid otherwise.
  • ReqData defines the data sent by clients.
  • ResData defines what the callback returns.
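For illustration, ReqData and ResData are plain TypeScript types. The generics only constrain compile time, so the server often adds a runtime guard on request.data; the shapes and names below are hypothetical:

```typescript
interface ReqData {
    number: number
}

interface ResData {
    message: string
}

// Runtime guard: the onCall generics don't validate the payload at runtime,
// so we narrow the client-provided data ourselves before using it.
function isReqData(data: unknown): data is ReqData {
    return (
        typeof data === "object" &&
        data !== null &&
        typeof (data as ReqData).number === "number"
    )
}
```

Inside the callback, a failed guard would typically translate into an HttpsError with the invalid-argument code.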

add options

onCall accepts an optional options object as the first argument, of type CallableOptions, which extends GlobalOptions.

const options: CallableOptions = {
    concurrency: 1,
    minInstances: 1,
    maxInstances: 1,
    region: "europe-west1",
}

concurrency sets how many requests a single instance processes in parallel. By default, an instance processes multiple requests in parallel. We set it to 1 for sequential processing, assuming we also set maxInstances to 1.

minInstances defaults to 0. To avoid cold starts, we can set minInstances to 1, at a higher cost, since an instance is kept warm.

We can cap maxInstances at 1.

Streaming version

Streaming the response means to send small chunks of data with sendChunk().

The third type argument (StreamData) defines what kind of chunk we stream. We usually stream string chunks.

The request exposes acceptsStreaming, which we read to check if the client supports streaming. When it does, the callback has access to an extra response argument, on which we call sendChunk().

onCall<T, U, V>(options, callback) // streaming Callable
onCall<ReqData, Promise<ResData>, StreamData>(async (request, response) => {
    if (request.acceptsStreaming) {
        response?.sendChunk("abc") // StreamData
        response?.sendChunk("def")
    } else return { message: ".." } // fallback
})

Patterns

halt and send an error immediately

We throw an HttpsError with an error code taken from a predefined list. Exceptions that are not HttpsError surface to the client as a generic internal error.

throw new HttpsError("unauthenticated", "unauthenticated")

logger

logger.debug("")
logger.info("")
logger.warn("")
logger.error("")

Callable v1 (deprecated)

define the function

functions.https.onCall(async (data, context) => {
    const auth = context.auth
    const message = data.message
    return { message: ".." }
})

the context object

The context object provides the authentication details (if any), such as the email, and the request metadata, such as the IP address or the raw HTTP request. It is of type CallableContext.

check authentication

if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "you must be authenticated")
}

Invoke Callable functions

We get a reference to the callable function, and call it like a regular function.

get a functions helper: set the firebase project and the region

Since a client can interact with Cloud Functions from separate Firebase projects, we specify the project we target. We do so indirectly, by providing the app helper, which already identifies the project.

Since a cloud function can live across regions as separate regional instances, we specify the region we target. We use one of the regional identifiers that we set in the Callable options. If omitted, the client SDK targets us-central1, which errors if no instance runs there.

const functions = getFunctions(app, "europe-west1")

get a handle over the Callable function

We also provide the type arguments:

const requestPokemonCF = httpsCallable<ReqData, ResData>(functions, "requestPokemon")

invoke and handle the result

The payload, if any, is of type ReqData. The returned value is of type HttpsCallableResult<ResData>. We read the data property:

const result = await requestPokemonCF({ number: 151 })
result.data // ResData

HTTP functions

overview

We establish a bare-bones REST API endpoint, called an HTTP function.

We respond with JSON, HTML, or plain text:

export const sayHello = onRequest((req, res) => {
    res.send("Hello from Firebase!")
})

add options

const options = {
    region: "europe-west1",
    cors: true,
}
export const sayHello = onRequest(options, (req, res) => {})

ExpressJS concepts and syntax

The req and res objects have the shape of Express.js req and res objects. We can add middleware.

call the endpoint: standard HTTP request (not firebase specific)

We read the function's URL at deploy time.

We consume endpoints like regular REST API endpoints. Their URL looks like this:

https://requestPlanet-x82jak2-ew.a.run.app

Run functions on Auth events

Register functions that listen and react to Firebase Auth events.

Blocking functions

run a function before the user is added to Firebase Auth

The function is blocking: we perform validation and, if applicable, throw an error to deny the registration. Firebase Auth aborts user creation on throw. The Auth client SDK receives such errors and can display them to the user:

export const onRegisterBlocking = beforeUserCreated(options, async (event) => {
    const user = event.data // AuthUserRecord === UserRecord
    // user.uid
    // user.email
    if (user?.email?.includes("@hotmail.com")) {
        throw new HttpsError("invalid-argument", "don't use hotmail")
    }
    // create the user in the database first, then return
    await createDefaultDataForUser(user)
    return
})

Non-blocking functions

The non-blocking functions run after a user has been created (or deleted) by Firebase Auth.

As of writing, there is no v2 version of the non-blocking functions.

export const f = auth.user().onCreate(async (user) => {})
export const g = auth.user().onDelete(async (user) => {})

example: add the user to the Firestore database

We read the auth user's uid and create a user document with it:

export const onRegisterNonBlocking = region("europe-west1")
    .auth.user()
    .onCreate(async (user) => {
        const { uid, email } = user
        await db.collection("users").doc(uid).set({ uid, email })
    })

example: delete the user from the Firestore database

export const onDeleteAccount = region("europe-west1")
    .auth.user()
    .onDelete(async function (user) {
        const { uid } = user
        await db.doc("users/" + uid).delete()
    })

on Firestore and Storage events

on Firestore events

Run Cloud Functions on database events. They are non-blocking: they run after writes. We use the term sanitization instead of validation, since they don't prevent writes.

sanitize data post-write

v2

export const onUserWritten = onDocumentWritten("users/{docId}", (event) => {
    const change = event.data
    const docId = event.params.docId

    const before = change.before.data()
    const after = change.after.data()
})

on Storage events

sanitize data post-upload

The user uploads a file to Firebase Storage, and we sanitize the data post-upload. For example:

exports.generateThumbnail = functions.storage.object().onFinalize(async (file) => {
    const fileBucket = file.bucket
    const filePath = file.name
    const contentType = file.contentType
    const metageneration = file.metageneration
    // Number of times metadata has been generated. New objects have a value of 1.
})

Create a thumbnail for an uploaded image.

JS Dates and Callable Functions

ISO strings are the better choice

When interacting with Callable Functions, it's best to represent dates as ISO strings. It is simple to reason about: the value and the type stay consistent on the client and on the server.

If we were to work with Date fields or Firestore Timestamp fields, the values would not be consistent between sending to the server and receiving from the server.

In this section, we explain what happens when we send Date and Timestamp objects to Callable Functions, or receive them back. Before being sent, both are serialized to JSON.

sending to Callable Functions

Timestamp is a Firestore-specific type and doesn't get any special treatment: it serializes to an object with seconds and nanoseconds (through toJSON()).

timestamp: { seconds: 1696751687, nanoseconds: 527000000 },

As for fields of type Date, they serialize to an ISO string (through toJSON()):

date: "2023-10-08T07:54:47.527Z"

We could technically instantiate a Timestamp or a Date:

new Timestamp(timestamp.seconds, timestamp.nanoseconds)
new Date(date)

returning from Callable Functions

If we return a Date object, it serializes to an ISO string.

If we return a Timestamp object, it serializes to its internal representation, possibly an object with _seconds and _nanoseconds. We should avoid this pattern.
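The serialization behavior above can be checked with plain objects: a Date serializes to an ISO string through toJSON(), and a received { seconds, nanoseconds } object can be converted back to a Date. The timestampToDate helper is illustrative, not part of any SDK:

```typescript
// JSON.stringify uses Date.prototype.toJSON(), which yields an ISO string.
const sent = { date: new Date("2023-10-08T07:54:47.527Z") }
const wire = JSON.parse(JSON.stringify(sent))

// Rebuild a Date from a serialized Timestamp-like object:
// seconds become milliseconds, nanoseconds fold into the milliseconds part.
function timestampToDate(ts: { seconds: number; nanoseconds: number }): Date {
    return new Date(ts.seconds * 1000 + ts.nanoseconds / 1e6)
}
```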

Environment variables

firebase secrets pattern

We provide secrets through the CLI tool:

firebase functions:secrets:set ABC_API_KEY

We then set a secret whitelist for each function, allowing it to access the given secrets:

const options: CallableOptions = {
    region: "europe-west1",
    secrets: ["ABC_API_KEY"],
}

onCall<ReqData, Promise<ResData>>(options, async (request) => {
    const abcKey = process.env.ABC_API_KEY
})
const options = { secrets: ["ABC_API_KEY"] }

onRequest(options, (req, res) => {
    const abcKey = process.env.ABC_API_KEY
})

debug secrets

We list the project's secrets:

gcloud secrets list --project <PROJECT_ID>

.env file pattern

The .env file pattern is weaker, but fine for local debugging. We set the env variables in a non-versioned .env file.

ABC_API_KEY=xxx

On deploy, the .env file is automatically detected and deployed alongside the functions. See the env-variables docs.

The access pattern is the same: we read process.env within Cloud Functions.

process.env

Debug Functions locally

start the functions emulator

We run the functions on their own (serve), or alongside other emulated services.

npm run serve
firebase emulators:start --only functions
firebase emulators:start --import emulator-data --export-on-exit

Callable functions are designed to be called from the client SDK. We can bypass this requirement locally:

invoke callable functions outside the client SDK

functions:shell starts the functions emulator and opens an interactive CLI shell from which we invoke callable functions with a payload.

firebase functions:shell
npm run shell # alternative

We provide the mandatory data property. It holds the payload:

requestArticles({ data: { name: "Lena" } })

We can also invoke them with curl:

curl -s -H "Content-Type: application/json" \
  -d '{ "data": { } }' \
  http://localhost:5001/imagetale/europe-west1/get_images

wire the client to the emulator

We redirect invocations towards the emulated functions, but only on localhost:

if (location.hostname === "localhost") {
    // ...
    connectFunctionsEmulator(functions, "localhost", 5001)
}

invoke emulated HTTP functions

We invoke HTTP functions with an HTTP request. The URL pattern is specific to the emulator.

http://localhost:5001/imagetale/europe-west1/get_images
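The emulator URL pattern can be captured in a small helper for local testing. The helper name is an assumption; 5001 is the functions emulator's default port:

```typescript
// Hypothetical helper: build the local emulator URL for an HTTP function.
// Pattern: http://localhost:<port>/<projectId>/<region>/<functionName>
function emulatorFunctionURL(
    projectId: string,
    region: string,
    functionName: string,
    port: number = 5001
): string {
    return `http://localhost:${port}/${projectId}/${region}/${functionName}`
}
```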

Schedule execution: Cron jobs

schedule periodic code execution

To define a schedule, we set both the periodicity and the timezone. To set the periodicity, we use strings such as every day 00:00 or every 8 hours. Then we add the callback function.

export const updateRankingsCRON = onSchedule(
    {
        schedule: "every day 00:00",
        timeZone: "Europe/London",
        region: "europe-west1",
    },
    async () => {
        // ...
    }
)

The former version (v1) uses a different API:

export const updateRankingsCRON = functions.pubsub
    .schedule("every 8 hours")
    .timeZone("Europe/London")
    .onRun(async (context) => {
        // ..
    })