Overview

Firebase allows developers to create a fully featured backend on top of servers and APIs operated by Google.

overall benefits

  • solid developer experience
  • scales to world-scale workloads
  • generous free-tier and pay-as-you-go model
  • high quality docs, well supported by AI models
  • actively developed and maintained

main backend components covered in this document

  • authentication with Firebase Auth
  • database with Cloud Firestore
  • storage with Cloud Storage
  • serverless functions with Cloud Functions

focus of this document: web-centric

We create a backend for web-apps, and use the web-centric client SDKs. We default to TypeScript, and pick Node.js as the runtime for Cloud Functions.

CLI tool

The Firebase CLI tool enables several workflows:

  • Emulate the Firebase backend locally, to run it and debug it at no cost.
  • Scaffold the Cloud Functions directory, and deploy Cloud Functions.
  • Submit secrets or API keys to Google, to make them available in Cloud Functions.
  • Add and Deploy security rules.
  • List the Firebase projects linked to the Google account.

the CLI executable

The firebase-tools npm package provides the firebase CLI executable.

npm install -g firebase-tools
firebase

Release notes

underlying Google account

Firebase projects are linked to a Google account.

firebase login
firebase login:list # prints current Google account
firebase logout

list projects and select one

firebase projects:list
firebase use imagetales

project configuration and scaffolding

The init command enables several workflows, among which:

  • scaffold the Cloud Functions directory
  • set up and configure the emulators
  • add security rules for Firestore and Cloud Storage

firebase init

help

  • print the list of Firebase commands.
  • print the details about a given command.

firebase help

firebase help emulators:start
firebase help deploy

deploy functions and manage secrets

firebase init
firebase functions:list

firebase deploy --only functions
firebase deploy --only functions:requestPlanet
firebase functions:secrets:access ABC_API_KEY
firebase functions:secrets:set ABC_API_KEY
firebase functions:secrets:destroy ABC_API_KEY

start emulators

firebase emulators:start
firebase emulators:start --import emulator-data --export-on-exit

We specify which emulators to run in firebase.json. We provide the port, or an empty object to use the default port. We scaffold this file with firebase init.

{
    "emulators": {
        "firestore": { "port": 8080 },
        "auth": { "port": 9099 },
        "functions": { "port": 5001 },
        "storage": { "port": 9199 },
        "ui": { "enabled": true }
    },
    "storage": { "rules": "storage.rules" },
    "firestore": {
        "rules": "firestore.rules",
        "indexes": "firestore.indexes.json"
    },
    "functions": [
        /* ... */
    ]
}

deploy security rules

The storage emulator requires storage access rules. We define Storage rules in storage.rules. We define Firestore rules in firestore.rules

firebase deploy --only storage
firebase deploy --only firestore:rules

gcloud: Google Cloud CLI tool

gcloud enables some operations not available in the firebase tool, such as listing secrets of a given project or describing a Storage bucket.

We call gcloud from the Google Cloud Console's Cloud Shell (it is pre-installed), or we install it locally from an archive provided by Google.

gcloud secrets list --project <PROJECT_ID>
gcloud storage buckets describe gs://abcd.firebasestorage.app

SDKs

Interact with the backend with the help of SDKs.

client SDKs

The client SDKs run on unprivileged clients, such as browsers. The JavaScript SDK primarily runs in browsers but can also run in a Node.js app that wants to act as an (unprivileged) client.

npm i firebase

admin SDK: privileged environments

The admin SDK is designed to run on secure, privileged environments.

The admin SDK authenticates itself against Google servers by using a privileged account called a service account. Service accounts are automatically created by Google, are scoped to a Firebase project and have specific entitlements. The admin SDK skips user-centric authentication and is not subject to security rules (which are designed to control untrusted requests).

We primarily use the admin SDK within Cloud Functions, an environment pre-configured by Google with the appropriate service account. The admin SDK detects it and uses it.

We use the Node.js admin SDK:

npm i firebase-admin

Cloud Functions SDK

We define Cloud Functions with the (Node.js) Cloud Functions SDK.

We have the package listed as a dependency after scaffolding the Cloud Functions directory with firebase init.

"firebase-functions": "^7.0.0",

Project setup and initialization

identify the Firebase project (client SDK)

The config object stores credentials to identify the Firebase project when interacting with Google servers. These credentials are not sensitive or confidential per se since they only serve to identify the project, and they are exposed on the client.

const firebaseConfig = {
    apiKey: "....",
    authDomain: ".....firebaseapp.com",
    projectId: "....",
    storageBucket: ".....firebasestorage.app",
    messagingSenderId: "....",
    appId: "....",
}

register one or more configs

We give the config to the client SDK. It returns a helper object that we initialize other services with.

const app = initializeApp(firebaseConfig)

When working with several Firebase projects, we get a helper for each project. The first helper has a "[DEFAULT]" internal string identifier. We must provide a string identifier for each additional project we want to work with.

const app1 = initializeApp(firebaseConfig1)
const app2 = initializeApp(firebaseConfig2, "two")

note: On Cloud Functions, the environment is automatically configured: we don't have a config object at all, and we get a helper without passing any config.

const app = initializeApp()

Auth Overview

authenticate app users

The Auth SDK aims to authenticate users and notify the app of Auth events. It provides several authentication flows.

auth helper and reading currentUser across the app

We keep a reference to the auth helper to read currentUser. We also provide the helper when using some auth related functions.

const auth = getAuth(app)
auth.currentUser // User | null

currentUser starts as null. Once the SDK has finished loading, and provided the user is logged in, currentUser switches to a User instance.

As a User instance, it holds the user's unique identifier (uid). Other properties may be empty:

currentUser.uid
currentUser.email
currentUser.phoneNumber
currentUser.displayName
currentUser.isAnonymous

react to authentication events

We register a callback on onAuthStateChanged, which Firebase runs on auth events. Firebase gives us a user object (of type User | null).

onAuthStateChanged(auth, (user) => {
    if (user) {
        // user.uid
    }
})

Auth events:

  • the auth SDK has finished loading and no user is authenticated

  • the user has registered (sign up)

  • the user has logged in (sign in)

  • the user has logged out (sign out)

Login occurs in three specific scenarios:

  • the user fills the standard login form or logs in through an identity provider (hard-login)
  • the user is recognized by the SDK and is logged in automatically (credentials stored in browser)
  • (canonically a registration) the user is automatically logged in after a successful sign-up. Note: a single authentication event occurs.

React patterns

We make the authentication status part of the React state. For example, we work with an isSignedIn variable. We make the display of the authenticated area conditional on isSignedIn being true.

On page load, the Auth SDK is still loading: if we initialize isSignedIn to false, it may not reflect the actual auth state, and it may instantly switch to true once the SDK has loaded, which may trigger a UI flicker.

It's best to wait for the SDK to load before making any use of isSignedIn. As such, we track the loading state in a one-off state variable, which becomes true on the first authentication event. Only then do we read isSignedIn.

const [hasLoaded, setHasLoaded] = useState(false)
const [isSignedIn, setIsSignedIn] = useState(false)

useEffect(() => {
    const unsub = onAuthStateChanged(auth, (user) => {
        setHasLoaded(true)
        setIsSignedIn(Boolean(user))
    })
    return unsub
}, []) // subscribe once, unsubscribe automatically on unmount.

if (!hasLoaded) return null
if (!isSignedIn) return <Lobby />
return <Ingame />

sign out

sign out is consistent across all authentication flows:

signOut(auth)

Email-Password accounts

A provider that relies on collecting the user's email and password.

registration and hard-login

createUserWithEmailAndPassword(auth, email, password).then((credential) => {
    credential.user // User
})
signInWithEmailAndPassword(auth, email, password).then((credential) => {
    credential.user // User
})

send a password reset email

We ask Firebase to send a password-reset email to the provided email. We can customize the email content through the Firebase console:

sendPasswordResetEmail(auth, email)

email account's providerData (implementation detail)

Note: "password" is the providerId for the email-password provider.

{
    "providerData": [
        {
            "providerId": "password",
            "uid": "user@example.com",
            "email": "user@example.com",
            "displayName": null,
            "phoneNumber": null,
            "photoURL": null
        }
    ]
}

Identity Providers

We target users having accounts with external providers, such as Google accounts or Apple accounts.

select one or several providers

Note: We enable providers in the Firebase console.

const provider = new GoogleAuthProvider() // Google Provider

authentication flows

Alternative flows:

  • the user authenticates through a popup window.
  • the user authenticates through a redirect.

Flows handle both sign-in and sign-up: we describe a flow with a generic control label:

  • "Authenticate with Foo"
  • "Continue with Foo"

Both flows trigger an authentication event on success. They return a credential (UserCredential):

const credential = await signInWithPopup(auth, provider)
credential.user // User

Note: We can detect it is a new user through a helper method:

const userInfo = getAdditionalUserInfo(credential)
if (userInfo?.isNewUser) {
}

popup flow

The popup flow may fail if the browser doesn't allow popups.

const credential = await signInWithPopup(auth, provider)

redirect flow

The redirect flow relies on navigating to another page and navigating back.

It requires extra work unless the website is hosted on Firebase Hosting.
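
A minimal sketch of the flow, using the SDK's signInWithRedirect() and getRedirectResult() (error handling omitted):

// navigate away to the provider's page
await signInWithRedirect(auth, provider)

// back on our page, after the redirect, collect the result
const credential = await getRedirectResult(auth) // UserCredential | null
if (credential) {
    credential.user // User
}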

Anonymous account

Register an account with no personal information from the user.

signInAnonymously(auth)

The generated credentials are stored in the browser: the user cannot access the account from other devices, and cannot recover the account if credentials are lost.

When it comes to Auth-triggered Cloud Functions, the creation of an anonymous account:

  • triggers user().onCreate()
  • doesn't trigger the beforeUserCreated() blocking function (not supported yet).

check if the account is anonymous

On the client, we check isAnonymous:

auth.currentUser?.isAnonymous // true for anonymous accounts

In auth-triggered Cloud Functions, we read providerData (from the UserRecord).

export const onRegisterNonBlocking = auth.user().onCreate(async (user) => {
    user.providerData.length === 0 // true for anonymous accounts
})

convert to a non-anonymous account

We link to another provider. Since the user already exists (currentUser), we provide it to the link function.

Link to an email credential, after collecting the email address and password:

const cred = EmailAuthProvider.credential(email, password)
await linkWithCredential(auth.currentUser, cred)

Link to an identity provider, with a popup:

const provider = new GoogleAuthProvider()
const result = await linkWithPopup(auth.currentUser, provider)

Firestore

conceptual

Firestore is a NoSQL database that is most similar to MongoDB. It's made of collections and documents.

A collection is a set of documents.

A document is a set of fields. A document may contain up to 20k fields and 1 MiB of data. A field holds some primitive data (number, string...)

A reference serves to identify a collection or a document in the database. It doesn't guarantee that the collection or document exists: it's merely a path (that may point to nothing).

references

packages and import paths

firebase-admin is a wrapper around @google-cloud/firestore. It has the same syntax and capabilities.

"firebase/firestore" // client SDK
"firebase/firestore/lite" // client SDK

"firebase-admin/firestore" // admin SDK

init and db helper object

We init a db object with the app helper, for use in Firestore-related functions.

const db = getFirestore(app)

Collection

Collection Reference

collection reference usage

We provide the collection reference to:

  • fetch all documents - getDocs(colRef)

  • build a query targeting the collection - query(colRef, filters..)

  • build a random-ID document reference - doc(colRef), or one that refers to a specific document - doc(colRef, docId)

  • add a document to the collection, with a random ID generated on the fly - addDoc(colRef, data).

build a collection reference

We use a path to identify the collection (uniquely). Root collections have a simple path, such as "users" (no starting slash). Sub-collection paths are made from several components.

We provide the path as:

  • a single string, with slash separators.

  • a sequence of strings, with no slash separators.

const colRef = collection(db, "users")
const colRef = collection(db, `users/${uid}/custom_list`)
const colRef = collection(db, "users", uid, "custom_list")
const colRef = db.collection(`users/${uid}/custom_list`) // admin SDK

TypeScript: set the document's type at the collection level.

Collections are schema-less: they don't define the shape of their documents.

When receiving document data from the database, the client SDK checks the actual data and instantiates documents with it. The instantiated documents may be of any shape and may be different from one another.

The instantiated documents are typed as DocumentData, which is a loose type that doesn't provide information about the content.

We should provide a more precise type. We set it at the collection reference level. We do it through a type assertion:

const colRef = collection(db, "players") as CollectionReference<Player, Player>

Instantiated documents are now of type Player.

Converter

The SDK supports having two document types on the client:

CollectionReference<AppModelType, DbModelType>

DbModel, which is DocumentData by default, represents the shape instantiated by the SDK when receiving data.

If we want to transform instantiated documents into a different shape for use within the app, we use a converter.

AppModel, which is also DocumentData by default, is the type parameter that represents the type after conversion. We set it to whatever type the converter converts to.

Before sending to Firestore, the converter should transform AppModel back to DbModel.

Transformation examples:

  • DbModel has a Timestamp field but we want AppModel to have a Date field.
  • We add properties to AppModel that are not present on DbModel.

implement the converter

We transform the documents at the app boundaries:

  • upon receiving from Firestore (fromFirestore())
  • upon preparing to send to Firestore (toFirestore())

We define two functions and add them to the converter.

fromFirestore() takes the snapshot as instantiated:

fromFirestore(snapshot: QueryDocumentSnapshot<FirestoreWorkout>): Workout {
    // to client shape
    const firestoreWorkout = snapshot.data()
    const workout = { ...firestoreWorkout, date: firestoreWorkout.date.toDate() }
    return workout
}

toFirestore() takes the object in its app-side shape.

toFirestore(workout: Workout) {
    // to database shape
    return { ...workout, date: Timestamp.fromDate(workout.date) }
}

We gather the transforms in the converter (FirestoreDataConverter). While the types may be inferred from the transforms, we may still add them for safety.

// FirestoreDataConverter<AppModel, DbModel>
const myConverter: FirestoreDataConverter<Workout, FirestoreWorkout> = {
    toFirestore() {},
    fromFirestore() {},
}

We attach it to the collection reference to let it type its documents.

const colRef = collection(db, "players").withConverter(myConverter)

Document

Document reference

The document reference identifies a document within the database, and embeds meta information:

docRef.id // "Nk....WQ"
docRef.path // "users/Nk....WQ"
docRef.parent // colRef

use document reference

We provide the reference for most CRUD operations:

  • create the document, or override an existing one (upsert): setDoc

  • update an existing document (it errors if the document isn't found): updateDoc

  • read the document: getDoc

  • delete the document: deleteDoc

build a document reference

The document's path identifies it uniquely. We provide the path as a single string or build it from string components.

const docRef = doc(db, "users", id) // string components
const docRef = doc(db, "users/Nk....WQ") // single string

const docRef = collectionRef.doc("NkJz11WQ") // admin sdk

Alternatively, we provide the collectionRef and the document ID, or just the collectionRef. In the latter case, the SDK builds the ref with a randomized ID.

const docRef = doc(collectionRef, id)
const docRef = doc(collectionRef) // randomized ID

attempt to read document at reference

The get operation succeeds even if no document exists at the given reference: it is an attempt. Checking for a document's existence is a valid use case.

The function returns a document snapshot unconditionally, which may be empty:

getDoc(docRef) // DocumentSnapshot
docRef.get() // DocumentSnapshot

Document snapshot

The document snapshot is a wrapper that doesn't guarantee the document's existence. It contains an instantiated DocumentData document or undefined.

Note: We may have provided a more specific type than DocumentData at the collection level as a type argument.

Note: data() is a function because it technically accepts some configuration.

docSnapshot.exists()
docSnapshot.data() // DocumentData | undefined

It also contains helpers and metadata.

docSnapshot.id // NkJz11WQ...7f
docSnapshot.ref // DocumentReference
docSnapshot.metadata // SnapshotMetadata

Query a specific field

docSnapshot.get("address.zipCode") // low use

real-time listener

Set up a real-time listener on a document reference:

const unsub = onSnapshot(docRef, (docSnapshot) => {
    docSnapshot.data() // DocumentData | undefined
})

Query

overview

A query aims to match documents based on a set of criteria instead of using pre-defined references.

the result of a query: a query snapshot

The SDK instantiates a query snapshot, a thin wrapper over a list of document snapshots (docs). The list is empty in case of no match.

The snapshots are of type QueryDocumentSnapshot, but the type has the same API surface as DocumentSnapshot.

querySnapshot.docs // list of document snapshots
querySnapshot.empty

A QueryDocumentSnapshot is guaranteed to have an underlying document at snapshot.data() (this is the difference from DocumentSnapshot).

const cats = querySnapshot.docs.map((snap) => snap.data())

a collection reference is technically a query

A collection ref is technically a query and can be used as such: in that case, we receive all the documents:

getDocs(colRef) // getDocs(q)

colRef.get() // q.get()

build a query

We query documents that match some criteria. We request a specific order and limit the document count.

const q = query(colRef, where(..), where(..), orderBy(..), limit(..))
const q = collection(..).where(..).orderBy(..).limit(..)
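
For example, a concrete query built with the client SDK (the collection and field names are illustrative):

const q = query(
    collection(db, "players"),
    where("zone", "==", "tatooine"),
    orderBy("score", "desc"),
    limit(10)
)
const snap = await getDocs(q)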

where filter

We filter documents based on a property. We request an exact value or one within a range.

Note: documents that do not possess the property are filtered out.

where(propertyName, operator, value)
where("id", "==", user.id)

where operators (strings):

<
<=
>
>=
==
!=
"in" // the property is equal to either A, B or C
"not-in" // the property is different from A, B and C.

operators when the field is an array:

"array-contains" // the array contains this value
"array-contains-any" // the array contains A, B or C..

order documents based on a field

We order documents based on the value of a given field. By default, documents are sorted in ascending order. It's best to set the order explicitly rather than relying on the default.

orderBy(propertyName, orderDirection)
orderBy("postCount", "asc")
orderBy("postCount", "desc")

We can start from a given value, e.g. documents that have at least 10 posts (or more than 10 posts).

startAt(10)
startAfter(10)

limit the size of the query

get at most n documents

limit(5)

pagination: start at or after a given document, which acts as a cursor

When doing pagination, we store the document snapshot we received last, and provide it in the new query.

startAfter(docSnapshot) // start after the docSnapshot

// low use:
startAt(docSnapshot) // start at the docSnapshot (include it again)
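
A sketch of cursor-based pagination, assuming a colRef and an illustrative createdAt field:

const firstPage = await getDocs(query(colRef, orderBy("createdAt"), limit(20)))
const lastSnap = firstPage.docs[firstPage.docs.length - 1]

const nextPage = await getDocs(query(colRef, orderBy("createdAt"), startAfter(lastSnap), limit(20)))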

run the query

getDocs implies we may receive multiple documents:

getDocs(query)
query.get()

real-time listener

Set up a real-time listener on the query: we still receive a query snapshot with docs:

const unsub = onSnapshot(query, (qs) => {
    const documents = qs.docs.map((docSnapshot) => docSnapshot.data())
    setMessages(documents)
})

Create and update data

We assume we have a document reference (or perform a reference-less document creation).

document creation

Create a document with a controlled ID; the operation aborts if a document already exists at the reference (admin SDK only):

docRef.create(data)

Create a document with a randomized ID. By design a document won't exist there:

addDoc(collectionRef, data)
db.collection("message").add(data)

The client SDK doesn't support the controlled-ID create() because it doesn't want to wait for a server response green-lighting the creation. We can still opt for this pattern in a two-step transaction where we first read and then write conditionally.
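
A minimal sketch of that conditional create with the client SDK's runTransaction(), assuming docRef and data are in scope:

await runTransaction(db, async (tx) => {
    const snap = await tx.get(docRef)
    if (snap.exists()) {
        throw new Error("document already exists")
    }
    tx.set(docRef, data) // only runs if the read saw no document
})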

When we accept (or want) destructive creation, i.e. overriding an existing document if needed, we use an upsert operation, supported by both the client SDK and the admin SDK. An upsert has the same result whether or not a document already exists (idempotent).

setDoc(docRef, data)
docRef.set(data)

partial update

We assume the document already exists: we use the update pattern, so that it correctly fails if the document doesn't exist.

The update pattern expects a change object, with one or more fields to update. The omitted fields are left unchanged. We type the change as a Partial of the document, or we explicitly pick fields with Pick<>.

updateDoc(docRef, data)
docRef.update(data)
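
A sketch of typing the change object, with an illustrative Player document type (deeply nested fields may call for the SDK's UpdateData type instead):

type PlayerChange = Partial<Pick<Player, "displayName" | "score">>

const change: PlayerChange = { score: 42 }
await updateDoc(docRef, change)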

We can also mutate a single field through sentinel values (FieldValue), as the next subsections show.

increment field

// admin SDK
docRef.update({
    count: FieldValue.increment(1),
})

// client SDK
const partialUserDoc = {
    activityScore: increment(1),
}

delete field

docRef.update({
    fleet: FieldValue.delete(),
})

server timestamp for field

This creates a trusted timestamp object. It is not needed from the admin SDK, because we can already trust a date created in the admin environment. Besides, it uses the Firebase-specific Timestamp instead of a multi-platform ISO date string.

docRef.update({
    count: FieldValue.serverTimestamp(),
})

partial update with set

set comes with a merge option that changes its meaning: we are now providing a change object. The risk is forgetting the merge option and overriding the document with the change object.

setDoc(docRef, data, { merge: true })
docRef.set(data, { merge: true })

delete document

docRef.delete()
deleteDoc(docRef)

timestamp value type (advanced)

Storing dates as ISO strings is simpler to reason about and more portable.

Still, the Firestore database comes with a native value type for storing dates, called timestamp, and we describe that pattern in this section. The Firestore SDK comes with a Timestamp type that represents a timestamp field.

storing timestamps

As we attempt to store data, the SDK detects Date and Timestamp fields and assumes we want to store them as timestamps.

const user = {
    createdAt: new Date(),
    createdAt_: Timestamp.now(),
}

When preparing data to be transported through an HTTP request, the SDK serializes Date and Timestamp objects to objects with a single timestampValue property.

{
  "createdAt": { "timestampValue": "2025-10-07T18:47:13.279000000Z" },
  "createdAt_": { "timestampValue": "2025-10-07T18:47:13.279000000Z" }
}

The database detects this pattern and stores those fields as timestamps.

receiving timestamps

Timestamp is the type designed to represent database timestamps. As we receive timestamp fields from the database, the Firestore SDK instantiates them as Timestamp objects.
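
For example, converting a received Timestamp back to a JS Date (the createdAt field is illustrative):

const createdAt = docSnapshot.data()?.createdAt // Timestamp
const asDate = createdAt?.toDate() // JS Date
const asIso = createdAt?.toDate().toISOString() // ISO string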

Firestore Security rules

We define the security rules in the Firebase console or in a firestore.rules file. Firebase doesn't bill requests denied by security rules.

rules version

rules_version = '2';

firestore scope

We start by scoping the rules to cloud.firestore

service cloud.firestore {
    // ...
}

database scope

We then scope the rules to the current database. This is boilerplate code: we don't use the database wildcard variable.

match /databases/{database}/documents {
    // ...
}

set rules for a given collection

We set rules for a given collection. The wildcard variable is the ID of the requested document. We may, for example, compare it with the user's authentication uid.

match /users/{user_id} {
    // ...
}
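
Putting the scopes together, a minimal complete firestore.rules sketch (the collection name is illustrative):

rules_version = '2';

service cloud.firestore {
    match /databases/{database}/documents {
        match /players/{player_id} {
            allow read: if request.auth != null && request.auth.uid == player_id;
        }
    }
}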

operations and condition

allow operation, operation: if condition;

operations

read
create
update
delete

authentication, user ID

If the user is not authenticated, request.auth is null. We may filter out unauthenticated users:

if request.auth != null;

The user's authentication uid is available as request.auth.uid:

request.auth.uid

green-light specific documents

We may green-light the document if its ID matches the user's uid.

    match /players/{player_id} {
         allow read: if request.auth.uid == player_id;
    }

Alternatively, we check if a field of the document matches the user's uid. For example, we check if the document's owner field matches the user uid. resource.data is the requested document.

    match /planets/{planet_id} {
         allow read: if request.auth.uid == resource.data.owner.id;
    }

Note: if auth is null, trying to read uid triggers a failsafe mechanism which denies the request. The same failsafe triggers if we attempt to read a field that doesn't exist on the requested resource.

get authorization information in a separate document

We may read a different document with get()

get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank

This unlocks a pattern where we read some authorization data in a different document, such as the user document, which would store the user's entitlements or ranks. This may not be a good architecture.

For example, to require a specific rank:

    match /characters/{character_id} {
         allow update: if get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank == "Game Master";
    }

For example, to enforce that the requested character's zone is the same as the player's character's zone:

match /overworld_characters/{overworld_character} {
     allow read: if get(/databases/$(database)/documents/characters/$(request.auth.uid)).data.zone == resource.data.zone;
}

check requested document

resource.data.uid
resource.data.zone
resource.data.required_rank

data validation

The request's payload is exposed as request.resource. We may check if one of its fields has the expected value.

request.resource.data.uid
request.resource.data.age > 0

// A) the user creates a post that mentions their own uid
allow create: if request.auth.uid == request.resource.data.uid;

// B) the user modifies a post that mentions their own uid,
//    and the updated post must still mention their uid
allow update, delete: if
    request.auth.uid == resource.data.uid
    &&
    request.auth.uid == request.resource.data.uid;

Storage

reference

object storage, file terminology and patterns

Firebase Storage is a wrapper around Google's Cloud Storage, a cloud storage service.

It is technically an object storage service because it stores immutable objects in a flat bucket, instead of files in a hierarchical filesystem.

Firebase Storage reintroduces the concept of files, folders and file hierarchy, primarily through the convention of using paths as object names, such as public/abc.png. The SDKs and docs use the term file instead of object.

project's default bucket (implementation detail)

A Firebase project is given a default bucket. The bucket's URI serves to distinguish it from other ones. It is made of two components: a gs:// prefix and a domain name. The default bucket domain uses the project's name, which makes it globally unique. If we add another bucket, we must pick a globally unique name by ourselves:

"gs://<PROJECT-ID>.firebasestorage.app"
"gs://<PROJECT-ID>.appspot.com" // old default bucket URIs

"gs://<GLOBALLY-UNIQUE-ID>" // non-default bucket URI

Those are not HTTP URLs: no data is served if we force HTTP URLs out of them.

initialization and storage helper

The client SDK initializes with the default bucket unless we specify another one. storageService is a safer helper name since storage is already exported by Firebase:

const storageService = getStorage(app)
const storageService = getStorage(app, "gs://...")

File references and metadata

file path

A file is uniquely identified by its path in the bucket: it is an unambiguous ID. The path includes the file extension.

file reference

We use references to interact with files. We build references by providing the file path:

const fileRef = ref(storage, "tts/2F14Izjv.mp3")
const fileRef = bucket.file("tts/2F14Izjv.mp3") // admin SDK

The file reference does not guarantee the file's existence. The reference properties are of limited use (client SDK) and can be confusing:

ref.bucket // "abc.firebasestorage.app"
ref.fullPath // "tts/2F14Izjv.mp3"
ref.name // "2F14Izjv.mp3"

// computed references
ref.parent // ref(storage, "tts")
ref.root // ref(storage, "/")

bucket file's metadata

We fetch an existing file's metadata:

const metadata = await getMetadata(fileRef) // client SDK

It is a FullMetadata instance. We have:

  • the file size in bytes
  • the MIME type
  • the date of creation as an ISO string:

// repeat from fileRef
metadata.bucket
metadata.fullPath
metadata.name

// size, type and time
metadata.size // 1048576
metadata.contentType // "audio/mpeg"
metadata.timeCreated // "2026-01-04T12:34:56.789Z"

metadata.ref // file reference

List files and folders

folder and prefix terminology

The API describes folders as prefixes, but the docs also mention folders.

folder existence

A file, by its name alone, may create several nested folders because we read it as a path. For example, abc/def/hello.pdf creates two folders: abc and def. Those folders do not exist per se, but only because we follow this arbitrary convention.

In this convention, folders can't be empty: if there is a folder, there is a nested file.

get references at folder level

We build a reference to a folder and list its content. The list API trims the nested items (shallow list).

The list discriminates files (items) from folders (prefixes), but both are exposed as references (StorageReference). The list exposes them as two arrays.

const folderRef = ref(storage, "uploads")

const result = await list(folderRef, { maxResults: 100 })
// const result = await listAll(folderRef)

result.items // StorageReference[]
result.prefixes // StorageReference[]

Read, download files

general considerations

  • The client SDK enforces access-rules. Some functions allow the user to save a bearer URL which bypasses the security rules (one-off access control).
  • Download workflows are influenced by the browser requirements and restrictions.

get a HTTP URL on the client

We may request a read URL. Access control is performed when requesting the URL.

The returned URL is a bearer URL, which is not subject to access-control. We consume it outside the realm of the Storage SDK, as a regular URL.

Note: the URL remains valid unless manually revoked at the file level in the Firebase Console.

getDownloadURL(fileRef).then(url => ...)

consume a cross-origin HTTP URL on the client.

The URL is cross-origin. The challenges and patterns to consume a cross-origin URL are not specific to Firebase.

Buckets do not have permissive CORS headers by default, but we may add them on demand. As a reminder, CORS headers may whitelist one, several or all domains. We use gsutil or gcloud to whitelist our domain, if necessary (see the dedicated chapter).

The way we consume the URL determines if CORS headers are necessary.

  • The browser allows cross-origin URLs in media elements' src attribute (hot linking), with no CORS headers required.
  • The browser allows navigating to cross-origin URLs (basic browser behavior). For example, we navigate to an image in a new tab.
  • The browser doesn't allow background fetch of cross-origin resources unless explicit CORS headers are present on the server. This applies to fetch() and functions that rely on it.

download a Blob with the client SDK

A blob is an opaque object that we can transform to a local URL. When downloading a Blob with the SDK's getBlob():

  • access rules are enforced
  • CORS headers are required (it uses fetch() under the hood)

When we create a local (same-origin) URL out of the blob, we avoid the browser restrictions related to cross-origin URLs. It restores the ability to download content through a single click, without navigating to a different URL (see below).

getBlob(fileRef).then((blob) => {
    // create a local URL and trigger download imperatively
})

URLs in anchor tags and the download attribute

The download attribute on an anchor tag (<a href="" download>) aims to offer one-click downloads. The pattern only works for same-origin URLs or local URLs.

For cross-origin URLs, clicking the anchor tag triggers standard browser navigation instead: the browser navigates to the resource and shows its full URL.

create a local URL out of a blob (browser specific)

This example creates a local URL, triggers the download programmatically, and revokes the local URL for cleanup.

// 1. Create a local URL for the blob
const objectURL = URL.createObjectURL(blob)

// 2. Use the local URL to trigger the download
const link = document.createElement("a")
link.href = objectURL
link.download = img.id + ".png"
document.body.appendChild(link)
link.click()
document.body.removeChild(link)

// 3. Clean up by revoking the local URL
URL.revokeObjectURL(objectURL)

Upload data

client SDK

upload a Blob or a File

We prepare some data in a JavaScript Blob or File object, and upload it to the reference.

const result = await uploadBytes(fileRef, file)

  • The upload is a non-conditional upsert which overrides any existing file.
  • It makes the file immediately downloadable with the SDK read functions.
  • On success, we receive an UploadResult, which wraps the bucket file's metadata and the file reference.

result.metadata // FullMetadata
result.ref

(advanced) upload and track the progress

For each tick, we receive a snapshot. We may show the upload progress.

const uploadTask = uploadBytesResumable(ref, file)

uploadTask.on(
    "state_changed",
    /* on snapshot */
    function (snapshot) {
        // snapshot.bytesTransferred
        // snapshot.totalBytes
        // snapshot.state // "paused" | "running"
    },
    function (error) {},
    function () {
        /* on completion */
        getDownloadURL(uploadTask.snapshot.ref).then(/**/)
    }
)

admin SDK

upload a Node.js Buffer and make it downloadable

We prepare some data in a Node.js Buffer, and upload it to the reference.

await fileRef.save(imageBuffer, {
    resumable: false,
    metadata: {
        contentType: `image/png`,
        cacheControl: "public, max-age=31536000, immutable",
    },
})

Note: it doesn't make the file downloadable for clients: a client-side getDownloadURL() fails. This is because the underlying Cloud Storage object is missing a Firebase-specific download token in its metadata.

To make it downloadable for clients, we use the admin SDK's getDownloadURL(). It attaches a download token to the underlying Cloud Storage object if needed. It also returns a bearer URL (that embeds this very access token and is not subject to security rules). We can store the bearer URL in a database, send it to the client, or discard it since the client can create the URL by itself with its own getDownloadURL().

const url = await getDownloadURL(fileRef)

We may invalidate an access token at any time, from the Firebase console. If we hardcoded bearer URLs in a database, they become invalid.

advanced: controlling the Cloud Storage object's token field

The token, if any, lives in the firebaseStorageDownloadTokens metadata field.

metadata: {
  firebaseStorageDownloadTokens: token
}

upload image example (admin SDK)

We upload an image and make it readable by clients. We may store the bearer URL.

// 1.0 create a file reference
const fileRef = bucket.file(`generated/${userID}/cat.png`)

// 1.1 create a Buffer object
const imageBuffer = base64ToBuffer(base64Data)

// 1.2 upload the Buffer object
await fileRef.save(imageBuffer, {
    resumable: false,
    metadata: {
        contentType: `image/png`,
        cacheControl: "public, max-age=31536000, immutable",
    },
})
//  1.3 make it readable by client SDKs (generate a token).
const url = await getDownloadURL(fileRef)

//  1.4 store the bearer URL (if applicable)
//  ...

Setting the bucket CORS header

Some read operations require the client's domain to be whitelisted by a CORS header. We list the authorized domains in a cors.json file and send it to Google through the CLI, with gcloud storage.

cors.json

[
    {
        "origin": ["https://imagetales.io", "http://localhost:5173"],
        "method": ["GET"],
        "maxAgeSeconds": 3600
    }
]

Send the json file:

gcloud storage buckets update gs://imagetales.firebasestorage.app --cors-file=cors.json

Describe the existing bucket CORS config

gcloud storage buckets describe gs://imagetales.firebasestorage.app --format="default(cors_config)"

read operations that require a CORS whitelist

Generally, those are browser reads relying on asynchronous (background) fetches rather than navigating to the URL through an anchor tag:

  • getBlob(fileRef) to get a Blob, which uses fetch() under the hood.
  • getBytes(fileRef) to get an ArrayBuffer, which uses fetch() under the hood.
  • using fetch() manually with a bearer URL we got with getDownloadURL() or that we stored somewhere before.

Cloud Functions

Cloud Functions is a serverless functions offering: we run code on servers operated by Google.

As it is a secure environment, we may run sensitive tasks: authenticate requests, perform server-side validation, use sensitive API keys, make sensitive writes to the database, and more.

The functions may trigger on spontaneous requests, or on events happening in the Firebase ecosystem, such as the registration of new users through Firebase Auth.

react to spontaneous requests: two options

The first option is to establish a bare-bones REST-API endpoint, called an HTTP function. It exposes a regular REST API endpoint, with an Express.js-like API.

The second option is to establish a Callable function, a pattern that involves both a server SDK and a client SDK, which work hand in hand to provide a better developer experience, such as having built-in authentication support.

onRequest and onCall are the two helpers that define those functions. They live in the https module.

import { onRequest, onCall } from "firebase-functions/https"

select and deploy functions

The main file exports the functions we want to deploy. The main file is the one declared as such in package.json:

{
    "main": "lib/index.js"
}

It is usually a barrel file that re-exports functions implemented in their own files:

export { requestPlayer } from "./requestPlayer.js"

We deploy functions imperatively. We deploy one or all of them:

firebase deploy --only functions
firebase deploy --only functions:requestPlayer

To delete a function, we remove it from the main file and run the deploy command on it. The CLI detects it is missing and prompts us for confirmation.

define functions with TypeScript

The main file must be a JavaScript file. We use a workflow that transpiles to JS. The convention is to store TS code in src/ and transpile to lib/ so that the main file is lib/index.js.

The tsconfig.json file fits projects running on Node.js:

{
    "compilerOptions": {
        "module": "NodeNext",
        "moduleResolution": "nodenext",
        "outDir": "lib",
        "esModuleInterop": true,
        "noImplicitReturns": true,
        "noUnusedLocals": true,
        "sourceMap": true,
        "strict": true,
        "target": "es2017"
    },
    "compileOnSave": true,
    "include": ["src"]
}
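
The scaffolded package.json typically wires scripts along these lines (a sketch; the exact template may differ), which the emulator commands below rely on:

{
    "scripts": {
        "build": "tsc",
        "serve": "npm run build && firebase emulators:start --only functions",
        "shell": "npm run build && firebase functions:shell",
        "deploy": "firebase deploy --only functions"
    }
}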

We make the transpilation continuous with the watch flag. The emulator detects changes in the JS functions and updates them on the fly:

tsc -w

admin SDK

Within functions, we interact with Firebase services such as databases and storage with the admin SDK. For example, we may work with the project's Firestore database:

import { initializeApp } from "firebase-admin/app"
import { getFirestore } from "firebase-admin/firestore"

const app = initializeApp()
const db = getFirestore(app)

Define Callable functions

The code we run in a Callable function has access to the user authentication status and the request's data.

Callable functions support streaming the response: we describe it in a dedicated section.

Overview and syntax

synopsis

onCall<ReqData, Promise<ResData>>(callback)
onCall<ReqData, Promise<ResData>>(options, callback)

the callback

The callback has access to the request (CallableRequest), which exposes auth and data.

We define the callback async so it returns a promise. The connection is kept open until the promise settles.

onCall<ReqData, Promise<ResData>>(async (request) => {
    request.auth // AuthData | undefined
    request.auth?.uid

    request.data // ReqData

    return { message: ".." } // ResData
})

  • auth is undefined when the request is unauthenticated. It has uid otherwise.
  • ReqData defines the data sent by clients.
  • ResData defines what the callback returns.

set options

onCall accepts an optional options object as the first argument, of type CallableOptions, an extension of GlobalOptions.

const options: CallableOptions = {
    concurrency: 1, // requests processed in parallel per instance
    minInstances: 1,
    maxInstances: 1,
    region: "europe-west1",
}

concurrency sets how many requests a single instance may process in parallel. By default, a single instance processes multiple requests in parallel. We set it to one if we prefer sequential request processing, assuming we also set maxInstances to 1.

minInstances defaults to 0. To avoid cold starts, we can set minInstances to 1, but it costs more since an instance stays warm all the time.

Streaming version

The request has acceptsStreaming, which we read to check if the client supports streaming. When it does, the callback has access to a response argument, on which we call response.sendChunk().

Streaming the response means to send small chunks of data with sendChunk().

The third type argument defines what kind of chunk we stream. We usually stream string chunks.

onCall<T, U, V>(options, callback) // streaming Callable
onCall<ReqData, Promise<ResData>, StreamData>(async (request, response) => {
    if (request.acceptsStreaming) {
        response?.sendChunk("abc") // StreamData
        response?.sendChunk("def")
    }
    return { message: ".." } // fallback for non-streaming clients
})

Patterns

halt and send an error immediately

We throw an HttpsError instance with a specific error code string that conforms to a predefined list. If we throw a generic error instead, the client receives an internal error.

throw new HttpsError("unauthenticated", "unauthenticated")

endpoint naming: request + action

Using request denotes that the server may refuse to perform the action. It separates the request from the action proper, which may live in another file.

logger

The SDK ships a structured logger whose output shows up in the emulator console and in Cloud Logging.
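
A minimal usage sketch, with an illustrative message and payload:

import { logger } from "firebase-functions"

logger.info("planet requested", { uid: "abc" })
logger.error("planet generation failed")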

Callable v1 (deprecated)

define the function

functions.https.onCall(async (data, context) => {
    const auth = context.auth
    const message = data.message
    return { message: ".." }
})

the context object

The context object provides the authentication details, if any, such as the email, and the request metadata, such as the IP address or the raw HTTP request. It is of type CallableContext.

check authentication

if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "you must be authenticated")
}

Invoke Callable functions

We get a reference to the callable function, and call it like a regular function.

specify the firebase project and the region

Since a client may interact with Cloud Functions from separate Firebase projects, we specify the project we target. We do so indirectly, by providing the app helper, which already identifies the project.

Since a function may deploy across regions as separate regional instances, we specify which instance we target. We use one of the regional identifiers defined in the Callable options. If omitted, the client SDK targets us-central1, which errors if no instance runs there.

Note: we set the region identifier at the getFunctions() level. That is, the functions helper is region-aware:

const functions = getFunctions(app, "europe-west1")

get a handle over the Callable function

We provide the function's name to httpsCallable().

const requestPokemonCF = httpsCallable<ReqData, ResData>(functions, "requestPokemon")

invoke and handle the result

We provide a payload, if applicable, of type ReqData. The result is of type HttpsCallableResult<ResData>. If it succeeds, we access the data:

const result = await requestPokemonCF({ number: 151 })
result.data // ResData

HTTP functions

overview

We establish a bare-bones REST-API endpoint, called an HTTP function: a regular REST API endpoint with an Express.js-like API.

We respond with JSON, HTML, or any other format.

export const sayHello = onRequest((req, res) => {
    res.send("Hello from Firebase!")
})

options argument

const options = {
    region: "europe-west1",
    cors: true,
}
export const sayHello = onRequest(options, (req, res) => {})

ExpressJS concepts and syntax

We may use middleware. The req and res objects have the shape of Express.js req and res objects.

invoke the function: standard HTTP request.

This is not specific to Firebase. From a web client, we use fetch().

We use the POST method and can provide a payload.
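
A sketch of such a call with fetch(); the URL is illustrative, and the deployed URL pattern depends on the project, region and functions version:

const response = await fetch("https://europe-west1-<PROJECT_ID>.cloudfunctions.net/sayHello", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Lena" }),
})
const text = await response.text()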

Functions on Auth events

Register functions that listen and react to Firebase Auth events.

Blocking functions

run a function before the user is added to Firebase Auth

The Authentication service waits for the function to complete successfully before adding the user. If the function throws, the user is not created, and an error is thrown to the client stating that the registration failed.

const options: BlockingOptions = {
    region: "europe-west1",
}

export const onRegisterBlocking = beforeUserCreated(options, async (event) => {
    const user = event.data // AuthUserRecord === UserRecord
    // user.uid
    // user.email
    if (user?.email?.includes("@hotmail.com")) {
        throw new HttpsError("invalid-argument", "don't use hotmail")
    }
    // create the user in the database first, then return
    await createDefaultDataForUser(user)
    return
})

Non-blocking functions

The non-blocking functions run after a user has been created or deleted by Firebase Auth.

Firebase Auth manages a list of users. It's best to mirror them in a database.

As of writing, there is no v2 version of the non-blocking functions.

export const f = auth.user().onCreate(async (user) => {})
export const g = auth.user().onDelete(async (user) => {})

example: add the user to the Firestore database

import { region } from "firebase-functions/v1"
import { db } from "../firebaseHelper.js"

export const onRegisterNonBlocking = region("europe-west1")
    .auth.user()
    .onCreate(async (user) => {
        const { uid, email } = user
        // add user to Firestore
        await db.collection("users").doc(uid).set({
            uid,
            email,
        })
        return
    })

example: delete the user from the Firestore database

import { region } from "firebase-functions/v1"
import { db } from "../firebaseHelper.js"

export const onDeleteAccount = region("europe-west1")
    .auth.user()
    .onDelete(async function (user) {
        const { uid } = user
        await db.doc("users/" + uid).delete()
        return
    })

Functions on other events

on Firestore events

Cloud functions triggered by a database event are non-blocking: they run after the write.

sanitize data post-write

// v1 syntax
exports.myFunction = functions.firestore
    .document("my-collection/{docId}")
    .onWrite((change, context) => {
        /* ... */
    })

on Storage events

sanitize data post-upload

The user uploads a file to Firebase Storage, and we sanitize the data post-upload. For example:

exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
    const fileBucket = object.bucket // the Storage bucket that contains the file
    const filePath = object.name // file path in the bucket
    const contentType = object.contentType // file content type
    const metageneration = object.metageneration // times metadata has been generated; new objects have a value of 1
})

A typical use case: create a thumbnail for the uploaded image.

JS Dates and Callable Functions

ISO strings are the better choice

When interacting with Callable Functions, it's best to represent dates as ISO strings. It is simple to reason about: the value and the type stay consistent on the client and on the server.

If we were to work with Date fields or even Firestore Timestamp fields, the value and the type would not stay consistent between sending to the server and receiving from the server. As such, it is a discouraged pattern.

In this article, we explain what happens if we send Date and Timestamp objects to Callable Functions or if we send them to the client from Callable functions. Before being sent, both are serialized to JSON.
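
A sketch of the ISO-string pattern (requestWorkoutCF and the field name are illustrative):

// client: serialize the date to an ISO string
await requestWorkoutCF({ performedAt: new Date().toISOString() })

// server (Callable callback): rebuild a Date when needed
const performedAt = new Date(request.data.performedAt)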

sending to Callable Functions

Timestamp is a Firestore-specific type and doesn't get special treatment: it serializes to an object with seconds and nanoseconds (through toJSON()).

timestamp: { seconds: 1696751687, nanoseconds: 527000000 },

As for fields of type Date, they serialize to an ISO string (through toJSON()):

date: "2023-10-08T07:54:47.527Z"

We could technically instantiate a Timestamp or a Date:

new Timestamp(timestamp.seconds, timestamp.nanoseconds)
new Date(date)

sending from Callable functions

If we attempt to return a Date object, it serializes to an ISO string.

If we attempt to return a Timestamp object, it serializes to the internal representation, possibly an object with _seconds and _nanoseconds. We should avoid this pattern.

Environment variables

firebase secrets pattern

We provide secrets through the CLI tool. We may then request that some Cloud Functions expose the secrets as Node.js process environment variables.

firebase functions:secrets:set ABC_API_KEY

.env file pattern

env-variables docs

We may set the env variables in a .env file:

ABC_API_KEY=xxx

The .env file should not be versioned. At function deployment, the Firebase CLI tool sends the .env file to Firebase servers.

read from env

read env within cloud functions

process.env

Callable function: declare the environment-variable dependencies that Firebase should expose on process.env:

const options: CallableOptions = {
    region: "europe-west1",
    secrets: ["ABC_API_KEY"],
}

onCall<ReqData, Promise<ResData>>(options, async (request) => {
    const abcKey = process.env.ABC_API_KEY
})

onRequest

const options = { secrets: ["ABC_API_KEY"] }

onRequest(options, (req, res) => {
    process.env.ABC_API_KEY
})

debug secrets

gcloud secrets list --project <PROJECT_ID>

legacy secret management

Tell Firebase to save a token/key on our behalf so that we can access it by reference in code, without writing the actual key in code (and in git as a result).

firebase functions:config:set sendgrid.key="...." sendgrid.template="TEMP"

Read from Env

Firebase exposes the tokens/keys in an object we get through the config() method.

const API_KEY = functions.config().sendgrid.key

Debug Functions locally

start the functions emulator

We run the functions on their own (serve), or along with other emulated services.

npm run serve
firebase emulators:start --only functions

firebase emulators:start --import emulator-data --export-on-exit

Note: by default, the callable functions must be called with the client SDK.

invoke callable functions outside the client SDK

functions:shell starts the functions emulator and starts an interactive CLI shell from which we invoke callable functions with a payload.

firebase functions:shell
npm run shell # alternative

We provide the mandatory data property. It holds the payload:

requestArticles({ data: { name: "Lena" } })

We can also invoke them with curl

curl -s -H "Content-Type: application/json" \
  -d '{ "data": { } }' \
  http://localhost:5001/imgtale/europe-west1/request_articles

wire the client to the emulator

We redirect invocations towards the emulated functions, but only on localhost:

if (location.hostname === "localhost") {
    // ...
    connectFunctionsEmulator(functions, "localhost", 5001)
}
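
The other emulated services have analogous connectors; a sketch using the ports from the firebase.json above:

connectAuthEmulator(auth, "http://localhost:9099")
connectFirestoreEmulator(db, "localhost", 8080)
connectStorageEmulator(storageService, "localhost", 9199)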

invoke emulated HTTP functions

We invoke HTTP functions with a HTTP request. The URL pattern is specific to the emulator.

http://localhost:5001/imgtale/europe-west1/request_articles

The deployed URL has a different pattern:

https://requestPlanet-x82jak2-ew.a.run.app

Schedule execution: Cron jobs

schedule periodic code execution

To define a schedule, we set both the periodicity and the timezone. To set the periodicity, we use strings such as every day 00:00 or every 8 hours. We also provide the callback function.

export const updateRankingsCRON = onSchedule(
    {
        schedule: "every day 00:00",
        timeZone: "Europe/Paris",
        region: "europe-west1",
    },
    async () => {
        // ...
    }
)

The former version (v1) uses a different API:

export const updateRankingsCRON = functions.pubsub
    .schedule("every 8 hours")
    .timeZone("Europe/Paris")
    .onRun(async (context) => {
        // ..
    })

We give the config to the client SDK. It returns a helper object that we initialize other services with.

const app = initializeApp(firebaseConfig)

When working with several Firebase projects, we get a helper for each project. The first helper has a "[DEFAULT]" internal string identifier. We must provide a string identifier for each additional project we want to work with.

const app1 = initializeApp(firebaseConfig1)
const app2 = initializeApp(firebaseConfig2, "two")

note: On Cloud Functions, the environment is automatically configured: there is no config object at all, and we get a helper without passing any config.

const app = initializeApp()

Auth Overview

authenticate app users

The Auth SDK aims to authenticate users and notify the app of Auth events. It provides several authentication flows.

auth helper and reading currentUser across the app

We keep a reference to the auth helper to read currentUser. We also provide the helper when using some auth related functions.

const auth = getAuth(app)
auth.currentUser // User | null

currentUser starts as null. When the SDK has finished loading, and provided the user has logged in, currentUser switches to a User instance.

As a User instance, it holds the user's unique identifier (uid). Other properties may be empty:

currentUser.uid
currentUser.email
currentUser.phoneNumber
currentUser.displayName
currentUser.isAnonymous

react to authentication events

We register a callback on onAuthStateChanged, which Firebase runs on auth events. Firebase gives us a user object (of type User | null).

onAuthStateChanged(auth, (user) => {
    if (user) {
        // user.uid
    }
})

Auth events:

  • the auth SDK has finished loading and no user is authenticated

  • the user has registered (sign up)

  • the user has logged in (sign in)

  • the user has logged out (sign out)

Login occurs in three specific scenarios:

  • the user fills the standard login form or logs in through an identity provider (hard-login)
  • the user is recognized by the SDK and is logged in automatically (credentials stored in browser)
  • (canonically a registration) the user is automatically logged-in after a successful sign-up. Note: a single authentication event occurs.

React patterns

We make the authentication status part of the React state. For example, we work with an isSignedIn variable. We make the display of the authenticated area conditional on isSignedIn being true.

On page load, the Auth SDK is still loading: if we initialize isSignedIn to false, it may not reflect the Auth reality, and may instantly switch to true once the SDK is loaded, which may trigger a UI flicker.

It's best to wait for the SDK to load before making any use of isSignedIn. As such, we track the loading state in a one-off state variable, which becomes true on the first authentication event. Only then do we read isSignedIn.

const [hasLoaded, setHasLoaded] = useState(false)
const [isSignedIn, setIsSignedIn] = useState(false)

useEffect(() => {
    const unsub = onAuthStateChanged(auth, (user) => {
        setHasLoaded(true)
        setIsSignedIn(Boolean(user))
    })
    return unsub
}, []) // subscribe once, unsubscribe on unmount

if (!hasLoaded) return null
if (!isSignedIn) return <Lobby />
return <Ingame />

sign out

sign out is consistent across all authentication flows:

signOut(auth)

Email-Password accounts

A provider that relies on collecting the user's email and password.

registration and hard-login

createUserWithEmailAndPassword(auth, email, password).then((credential) => {
    credential.user // User
})
signInWithEmailAndPassword(auth, email, password).then((credential) => {
    credential.user // User
})

send a password reset email

We ask Firebase to send a password-reset email to the provided email. We can customize the email content through the Firebase console:

sendPasswordResetEmail(auth, email)

email account's providerData (implementation detail)

Note: "password" is the providerId for the email-password provider.

{
    "providerData": [
        {
            "providerId": "password",
            "uid": "user@example.com",
            "email": "user@example.com",
            "displayName": null,
            "phoneNumber": null,
            "photoURL": null
        }
    ]
}

Identity Providers

We target users having accounts with external providers, such as Google accounts or Apple accounts.

select one or several providers

Note: We enable providers in the Firebase console.

const provider = new GoogleAuthProvider() // Google Provider

authentication flows

Alternative flows:

  • the user authenticates through a popup window.
  • the user authenticates through a redirect.

Flows handle both sign-in and sign-up: we describe a flow with a generic control label:

  • "Authenticate with Foo"
  • "Continue with Foo"

Both flows trigger an authentication event on success. They return a credential (UserCredential):

const credential = await signInWithPopup(auth, provider)
credential.user // User

Note: We can detect it is a new user through a helper method:

const userInfo = getAdditionalUserInfo(credential)
if (userInfo?.isNewUser) {
    // first sign-in with this provider: e.g. run onboarding
}

popup flow

The popup flow may fail if the browser doesn't allow popups.

const credential = await signInWithPopup(auth, provider)

redirect flow

The redirect flow relies on navigating to another page and navigating back.

It requires extra work unless the website is hosted on Firebase Hosting.
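
A minimal sketch of the redirect flow, using the SDK's signInWithRedirect() and getRedirectResult():

// navigates away to the provider's page
signInWithRedirect(auth, provider)

// on page load, after navigating back
const credential = await getRedirectResult(auth)
if (credential) {
    credential.user // User
}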

Anonymous account

Register an account with no personal information from the user.

signInAnonymously(auth)

The generated credentials are stored in the browser: the user cannot access the account from other devices, and cannot recover the account if credentials are lost.

When it comes to Auth-triggered Cloud Functions, the creation of an anonymous account:

  • triggers user().onCreate()
  • doesn't trigger the beforeUserCreated() blocking function (not supported yet).

check if the account is anonymous

On the client, we check isAnonymous:

auth.currentUser?.isAnonymous // true for anonymous accounts

In auth-triggered Cloud Functions, we read providerData (from the UserRecord).

export const onRegisterNonBlocking = auth.user().onCreate(async (user) => {
    user.providerData.length === 0 // true for anonymous accounts
})

convert to a non-anonymous account

We link to another provider. Since the user already exists (currentUser), we provide it to the link function.

Link to an email credential, after collecting the email address and password:

const cred = EmailAuthProvider.credential(email, password)
await linkWithCredential(auth.currentUser, cred)

Link to an identity provider, with a popup:

const provider = new GoogleAuthProvider()
const result = await linkWithPopup(auth.currentUser, provider)

Firestore

conceptual

Firestore is a NoSQL database that is most similar to MongoDB. It's made of collections and documents.

A collection is a set of documents.

A document is a set of fields. A document may contain up to 20k fields and 1 MiB of data. A field holds a value such as a number, a string, an array, or a nested map.

A reference serves to identify a collection or a document in the database. It doesn't guarantee that the collection or document exists: it's merely a path (that may point to nothing).

references

packages and import paths

firebase-admin is a wrapper around @google-cloud/firestore. It has the same syntax and capabilities.

"firebase/firestore" // client SDK
"firebase/firestore/lite" // client SDK

"firebase-admin/firestore" // admin SDK

init and db helper object

We init a db object with the app helper, for use in Firestore-related functions.

const db = getFirestore(app)

Collection

Collection Reference

collection reference usage

We provide the collection reference to:

  • fetch all documents - getDocs(colRef)

  • build a query targeting the collection - query(colRef, filters..)

  • build a random-ID document reference - doc(colRef), or one that refers to a specific document - doc(colRef, docId)

  • add a document to the collection, with a random ID generated on the fly - addDoc(colRef, data).

build a collection reference

We use a path to identify the collection (uniquely). Root collections have a simple path, such as "users" (no starting slash). Sub-collection paths are made from several components.

We provide the path as:

  • a single string, with slash separators.

  • a sequence of strings, with no slash separators.

const colRef = collection(db, "users")
const colRef = collection(db, `users/${uid}/custom_list`)
const colRef = collection(db, "users", uid, "custom_list")
const colRef = db.collection(`users/${uid}/custom_list`) // admin SDK

TypeScript: set the document's type at the collection level.

Collections are schema-less: they don't define the shape of their documents.

When receiving document data from the database, the client SDK checks the actual data and instantiates documents with it. The instantiated documents may be of any shape and may be different from one another.

The instantiated documents are typed as DocumentData, which is a loose type that doesn't provide information about the content.

We should provide a more precise type. We set it at the collection reference level. We do it through a type assertion:

const colRef = collection(db, "players") as CollectionReference<Player, Player>

Instantiated documents are now of type Player.

Converter

The SDK supports having two document types on the client:

CollectionReference<AppModelType, DbModelType>

DbModel, which is DocumentData by default, represents the shape instantiated by the SDK when receiving data.

If we want to transform instantiated documents into a different shape for use within the app, we use a converter.

AppModel, which is also DocumentData by default, is the type parameter that represents the type after conversion. We set it to whatever type the converter converts to.

Before sending to Firestore, the converter transforms AppModel back to DbModel.

Transformation examples:

  • DbModel has a Timestamp field but we want AppModel to have a Date field.
  • We add properties to AppModel, that are not present on DbModel.

implement the converter

We transform the documents at the app boundaries:

  • upon receiving from Firestore (fromFirestore())
  • upon preparing to send to Firestore (toFirestore())

We define two functions and add them to the converter.

fromFirestore() takes the snapshot as instantiated:

fromFirestore(snapshot: QueryDocumentSnapshot<FirestoreWorkout>): Workout {
    // to client shape
    const firestoreWorkout = snapshot.data()
    const workout = { ...firestoreWorkout, date: firestoreWorkout.date.toDate() }
    return workout
}

toFirestore() takes the object in its app-side shape.

toFirestore(workout: Workout) {
    // to database shape
    return { ...workout, date: Timestamp.fromDate(workout.date) }
}

We gather the transforms in the converter (FirestoreDataConverter). While the types may be inferred from the transforms, we may still annotate them for safety.

// FirestoreDataConverter<AppModel, DbModel>
const myConverter: FirestoreDataConverter<Workout, FirestoreWorkout> = {
    toFirestore() {},
    fromFirestore() {},
}

We attach it to the collection reference to let it type its documents.

const colRef = collection(db, "players").withConverter(myConverter)
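
Putting the pieces together, a minimal sketch, reusing the db helper from earlier (the Workout and FirestoreWorkout shapes are illustrative):

import { collection, FirestoreDataConverter, QueryDocumentSnapshot, Timestamp } from "firebase/firestore"

interface FirestoreWorkout { name: string; date: Timestamp }
interface Workout { name: string; date: Date }

const workoutConverter: FirestoreDataConverter<Workout, FirestoreWorkout> = {
    toFirestore(workout: Workout) {
        // to database shape
        return { ...workout, date: Timestamp.fromDate(workout.date) }
    },
    fromFirestore(snapshot: QueryDocumentSnapshot<FirestoreWorkout>): Workout {
        // to client shape
        const data = snapshot.data()
        return { ...data, date: data.date.toDate() }
    },
}

const workoutsRef = collection(db, "workouts").withConverter(workoutConverter)
// documents read through workoutsRef are now typed as Workout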

Document

Document reference

The document reference identifies a document within the database, and embeds meta information:

docRef.id // "Nk....WQ"
docRef.path // "users/Nk....WQ"
docRef.parent // colRef

use document reference

We provide the reference for most CRUD operations:

  • create the document, or override an existing one (upsert): setDoc

  • update an existing document (it errors if the document isn't found): updateDoc

  • read the document: getDoc

  • delete the document: deleteDoc

build a document reference

The document's path identifies it uniquely. We provide the path as a single string or build it from string components.

const docRef = doc(db, "users", id) // string components
const docRef = doc(db, "users/Nk....WQ") // single string

const docRef = collectionRef.doc("NkJz11WQ") // admin sdk

Alternatively, we provide the collectionRef and the document ID, or just the collectionRef. In the latter case, the SDK builds the ref with a randomized ID.

const docRef = doc(collectionRef, id)
const docRef = doc(collectionRef) // randomized ID

attempt to read document at reference

The get operation succeeds even if no document exists at the given reference: it is an attempt. Checking for a document's existence is a valid use case.

The function returns a Document snapshot unconditionally, which may be empty:

getDoc(docRef) // DocumentSnapshot (client SDK)
docRef.get() // DocumentSnapshot (admin SDK)

Document snapshot

The document snapshot is a wrapper that doesn't guarantee the document's existence. It contains an instantiated DocumentData document or undefined.

Note: We may have provided a more specific type than DocumentData at the collection level as a type argument.

Note: data() is a function because it technically accepts some configuration.

docSnapshot.exists()
docSnapshot.data() // DocumentData | undefined

It also contains helpers and metadata.

docSnapshot.id // NkJz11WQ...7f
docSnapshot.ref // DocumentReference
docSnapshot.metadata // SnapshotMetadata

Query a specific field

docSnapshot.get("address.zipCode") // low use

real-time listener

Set up a real-time listener on a document reference:

const unsub = onSnapshot(docRef, (docSnapshot) => {
    docSnapshot.data() // DocumentData | undefined
})

Query

overview

A query aims to match documents based on a set of criteria instead of using pre-defined references.

the result of a query: a query snapshot

The SDK instantiates a query snapshot, a thin wrapper over a list of document snapshots (docs). The list is empty in case of no match.

The snapshots are of type QueryDocumentSnapshot, but the type has the same API surface as DocumentSnapshot.

querySnapshot.docs // list of document snapshots
querySnapshot.empty

A QueryDocumentSnapshot is guaranteed to have an underlying document at snapshot.data() (this is the difference from DocumentSnapshot).

const cats = querySnapshot.docs.map((snap) => snap.data())

a collection reference is technically a query

A collection ref is technically a query and can be used as such: in that case, we receive all documents:

getDocs(colRef) // getDocs(q)

colRef.get() // q.get()

build a query

We query documents that match some criteria. We request a specific order and limit the document count.

const q = query(colRef, where(..), where(..), orderBy(..), limit(..)) // client SDK
const q = collection(..).where(..).orderBy(..).limit(..) // admin SDK

where filter

We filter documents based on a property. We request an exact value or one within a range.

Note: documents that do not possess the property are filtered out.

where(propertyName, operator, value)
where("id", "==", user.id)

where operators (strings):

<
<=
>
>=
==
!=
"in" // the property is equal to either A, B or C
"not-in" // the property is different from A, B and C.

operators when the field is an array:

"array-contains" // the array contains this value
"array-contains-any" // the array contains A, B or C..

order documents based on a field

We order documents based on the value of a given field. By default, it sorts documents so that the value is ascending. It's best to set the order explicitly rather than relying on the default ascending order.

orderBy(propertyName, orderDirection)
orderBy("postCount", "asc")
orderBy("postCount", "desc")

We can start from a given value, e.g. documents that have at least 10 posts (or more than 10 posts).

startAt(10)
startAfter(10)

limit the size of the query

get at most n documents

limit(5)

pagination: start at or after a given document, that acts as a cursor

When doing pagination, we store the document snapshot we received last, and provide it in the new query.

startAfter(docSnapshot) // start after the docSnapshot

// low use:
startAt(docSnapshot) // start at the docSnapshot (include it again)
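
A minimal pagination sketch (the posts collection and createdAt field are illustrative):

const pageQuery = query(
    collection(db, "posts"),
    orderBy("createdAt", "desc"),
    limit(10)
)
const pageSnap = await getDocs(pageQuery)
const lastSnap = pageSnap.docs[pageSnap.docs.length - 1]

// next page: reuse the same criteria, starting after the cursor
const nextQuery = query(
    collection(db, "posts"),
    orderBy("createdAt", "desc"),
    startAfter(lastSnap),
    limit(10)
)
const nextSnap = await getDocs(nextQuery)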

run the query

The plural getDocs signals that we may receive several documents:

getDocs(query) // client SDK
query.get() // admin SDK

real-time listener

Set up a real-time listener on the query: we still receive a query snapshot with docs:

const unsub = onSnapshot(query, (qs) => {
    const documents = qs.docs.map((docSnapshot) => docSnapshot.data())
    setMessages(documents)
})

Create and update data

We assume we have a document reference (or perform a reference-less document creation).

document creation

Create a document with a controlled ID. The operation aborts if a document already exists at the reference (admin SDK only).

docRef.create(data)

Create a document with a randomized ID. By design a document won't exist there:

addDoc(collectionRef, data)
db.collection("message").add(data)

The client SDK doesn't support the controlled-ID create() because it doesn't want to wait for a server response green-lighting the creation. We can still opt for this pattern in a two-step transaction where we first read and then write conditionally, as sketched below.
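
A minimal sketch of this two-step transaction with the client SDK (docRef and data are assumed from the surrounding examples):

import { runTransaction } from "firebase/firestore"

await runTransaction(db, async (tx) => {
    const snap = await tx.get(docRef)
    if (snap.exists()) {
        // abort: a document already lives at this reference
        throw new Error("document already exists")
    }
    tx.set(docRef, data)
})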

In the case where we accept to create destructively (or want to), i.e. override an existing document if needed, we use an upsert operation, supported by both the client SDK and the admin SDK. An upsert has the same result regardless of whether a document already exists (idempotent).

setDoc(docRef, data)
docRef.set(data)

partial update

We assume the document already exists: we use the update pattern, so that it correctly fails if the document doesn't exist.

The update pattern expects a change object, with one or more fields to update. The omitted fields are left unchanged. We type the change as a Partial of the document, or we explicitly pick fields with Pick<>.

updateDoc(docRef, data)
docRef.update(data)

We can also mutate a single field with FieldValue sentinels, as shown below:

increment field

docRef.update({
    count: FieldValue.increment(1),
})

// client SDK: increment() is imported from "firebase/firestore"
updateDoc(docRef, {
    activityScore: increment(1),
})

delete field

docRef.update({
    fleet: FieldValue.delete(),
})

server timestamp for field

serverTimestamp() creates a trusted timestamp, set by the server at write time. It is less needed from the admin SDK, since we may already trust a date created in the admin environment. Note that it stores a Firebase-specific Timestamp instead of a multi-platform ISO date string.

docRef.update({
    updatedAt: FieldValue.serverTimestamp(),
})

partial update with set

set comes with a merge option that changes its meaning: we are now providing a change object. The risk is to forget the merge option and override the document with a change object.

setDoc(docRef, data, { merge: true })
docRef.set(data, { merge: true })

delete document

docRef.delete()
deleteDoc(docRef)

timestamp value type (advanced)

Storing dates as ISO strings is simpler to reason about and more portable.

Still, as the Firestore database comes with a native value type for storing dates, called timestamp, we describe that pattern in this section. The Firestore SDK comes with a Timestamp type that represents a timestamp field.

storing timestamps

As we attempt to store data, the SDK detects Date and Timestamp fields and assumes we want to store them as timestamps.

const user = {
    createdAt: new Date(),
    createdAt_: Timestamp.now(),
}

When preparing data to be transported through an HTTP request, the SDK serializes Date and Timestamp objects to objects with a single timestampValue property.

{
  "createdAt": { "timestampValue": "2025-10-07T18:47:13.279000000Z" },
  "createdAt_": { "timestampValue": "2025-10-07T18:47:13.279000000Z" }
}

The database detects this pattern and stores those fields as timestamps.

receiving timestamps

Timestamp is the designed type to represent database timestamps. As we receive timestamp fields from the database, the Firestore SDK instantiates them as Timestamp objects.
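
A minimal sketch, assuming a document with a createdAt timestamp field:

const snap = await getDoc(docRef)
const createdAt = snap.data()?.createdAt // Timestamp
const date = createdAt?.toDate() // JS Date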

Firestore Security rules

We define the security rules in the Firebase console or in a firestore.rules file. Firebase doesn't bill requests denied by security rules.

rules version

rules_version = "2"

firestore scope

We start by scoping the rules to cloud.firestore

service cloud.firestore {
    // ...
}

database scope

We then scope the rules to the current database. This is boilerplate code: we don't use the database wildcard variable.

match /databases/{database}/documents {
    // ...
}

set rules for a given collection

We set rules for a given collection. The wildcard variable is the ID of the requested document. We may, for example, compare it with the user's authentication uid.

match /users/{user_id} {
    // ...
}

operations and condition

allow operation, operation: if condition;

operations

read
create
update
delete

authentication, user ID

If the user is not authenticated, request.auth is null. We may filter out unauthenticated users:

if request.auth != null;

The user's authentication uid is available as request.auth.uid:

request.auth.uid

green-light specific documents

We may green-light the document if its ID matches the user's uid.

    match /players/{player_id} {
         allow read: if request.auth.uid == player_id;
    }
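
Putting the scopes together, a minimal firestore.rules sketch (reusing the players collection from above):

rules_version = "2"

service cloud.firestore {
    match /databases/{database}/documents {
        match /players/{player_id} {
            allow read: if request.auth != null && request.auth.uid == player_id;
        }
    }
}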

Alternatively, we check if a field of the document matches the user's uid. For example, we check if the document's owner field matches the user uid. resource.data is the requested document.

    match /planets/{planet_id} {
         allow read: if request.auth.uid == resource.data.owner.id;
    }

Note: if auth is null, trying to read uid triggers a failsafe mechanism which denies the request. The same failsafe triggers if we attempt to read a field that doesn't exist on the requested resource.

get authorization information in a separate document

We may read a different document with get()

get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank

This unlocks a pattern where we read some authorization data in a different document, such as the user document, which would store the user's entitlements or ranks. Note that each get() call is billed as an extra document read, so this may not be a good architecture.

For example, to require a specific rank:

    match /characters/{character_id} {
         allow update: if get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank == "Game Master";
    }

For example, to enforce that the requested character's zone is the same as the player's character's zone:

match /overworld_characters/{overworld_character} {
     allow read: if get(/databases/$(database)/documents/characters/$(request.auth.uid)).data.zone == resource.data.zone;
}

check requested document

resource.data.uid
resource.data.zone
resource.data.required_rank

data validation

The request's payload is exposed as request.resource. We may check that one of its fields has the expected value.

request.resource.data.uid
request.resource.data.age > 0

// A) create: the user sends a post that mentions them as uid
allow create: if request.auth.uid == request.resource.data.uid;

// B) update/delete: the user modifies a post that mentions them as uid,
//    and must send a post that still mentions them as uid
allow update, delete: if
    request.auth.uid == resource.data.uid
    && request.auth.uid == request.resource.data.uid;

Storage

reference

object storage, file terminology and patterns

Firebase Storage is a wrapper around Google's Cloud Storage, a cloud storage service.

It is technically an object storage service because it stores immutable objects in a flat bucket, instead of files in a hierarchical filesystem.

Firebase Storage reintroduces the concept of files, folders and file hierarchy, primarily through the convention of using paths as object names, such as public/abc.png. The SDKs and docs use the term file instead of object.

project's default bucket (implementation detail)

A Firebase project is given a default bucket. The bucket's URI serves to distinguish it from other ones. It is made of two components: a gs:// prefix and a domain name. The default bucket domain uses the project's name, which makes it globally unique. If we add another bucket, we must pick a globally unique name by ourselves:

"gs://<PROJECT-ID>.firebasestorage.app"
"gs://<PROJECT-ID>.appspot.com" // old default bucket URIs

"gs://<GLOBALLY-UNIQUE-ID>" // non-default bucket URI

Those are not HTTP URLs: no data is served if we force HTTP URLs out of them.

initialization and storage helper

The client SDK initializes with the default bucket unless we specify another one. storageService is a safer helper name since storage is already exported by Firebase:

const storageService = getStorage(app)
const storageService = getStorage(app, "gs://...")

File references and metadata

file path

A file is uniquely identified by its path in the bucket: it is an unambiguous ID. The path includes the file extension.

file reference

We use references to interact with files. We build references by providing the file path:

const fileRef = ref(storage, "tts/2F14Izjv.mp3")
const fileRef = bucket.file("tts/2F14Izjv.mp3") // admin SDK

The file reference does not guarantee the file existence. The reference properties are of limited use (client SDK) and confusing:

ref.bucket // "abc.firebasestorage.app"
ref.fullPath // "tts/2F14Izjv.mp3"
ref.name // "2F14Izjv.mp3"

// computed references
ref.parent // ref(storage, "tts")
ref.root // ref(storage, "/")

bucket file's metadata

We fetch an existing file's metadata:

const metadata = await getMetadata(fileRef) // client SDK

It is a FullMetadata instance. We have:

  • the file size in bytes
  • the MIME type
  • the date of creation as an ISO string

// repeat from fileRef
metadata.bucket
metadata.fullPath
metadata.name

// size, type and time
metadata.size // 1048576
metadata.contentType // "audio/mpeg"
metadata.timeCreated // "2026-01-04T12:34:56.789Z"

metadata.ref // file reference

List files and folders

folder and prefix terminology

The API describes folders as prefixes, but the docs also mention folders.

folder existence

A file, by its name alone, may create several nested folders because we read it as a path. For example, abc/def/hello.pdf creates two folders: abc and def. Those folders do not exist per se, but only because we follow this arbitrary convention.

In this convention, folders can't be empty: if there is a folder, there is a nested file.

get references at folder level

We build a reference to a folder and list its content. The list API trims the nested items (shallow list).

The list discriminates files (items) from folders (prefixes), but both are exposed as references (StorageReference). The list exposes them as two arrays.

const folderRef = ref(storage, "uploads")

const result = await list(folderRef, { maxResults: 100 })
// const result = await listAll(folderRef)

result.items // StorageReference[]
result.prefixes // StorageReference[]

Read, download files

general considerations

  • The client SDK enforces access-rules. Some functions allow the user to save a bearer URL which bypasses the security rules (one-off access control).
  • Download workflows are influenced by the browser requirements and restrictions.

get an HTTP URL on the client

We may request a read URL. Access control is performed when requesting the URL.

The returned URL is a bearer URL, which is not subject to access-control. We consume it outside the realm of the Storage SDK, as a regular URL.

Note: the URL remains valid unless manually revoked at the file level in the Firebase Console.

getDownloadURL(fileRef).then(url => ...)

consume a cross-origin HTTP URL on the client

The URL is cross-origin. The challenges and patterns to consume a cross-origin URL are not specific to Firebase.

Buckets do not have permissive CORS headers by default, but we may add them on demand. As a reminder, CORS headers may whitelist one, several or all domains. We use gsutil or gcloud to whitelist our domain, if necessary (see the dedicated chapter).

The way we consume the URL determines if CORS headers are necessary.

  • The browser allows cross-origin URLs in media elements' src attribute (hot linking), with no CORS headers required.
  • The browser allows navigating to cross-origin URLs (basic browser behavior). For example, we navigate to an image in a new tab.
  • The browser doesn't allow background fetch of cross-origin resources unless explicit CORS headers are present on the server. This applies to fetch() and functions that rely on it.

download a Blob with the client SDK

A blob is an opaque object that we can transform to a local URL. When downloading a Blob with the SDK's getBlob():

  • access rules are enforced
  • CORS headers are required (it uses fetch() under the hood)

When we create a local (same-origin) URL out of the blob, we avoid the browser restrictions related to cross-origin URLs. It restores the ability to download content through a single click, without navigating to a different URL (see below).

getBlob(fileRef).then((blob) => {
    // create a local URL and trigger download imperatively
})

URLs in anchor tags and the download attribute

The download attribute on an anchor tag (<a href="" download>) aims to offer one-click downloads. The pattern only works for same-origin URLs or local URLs.

For cross-origin URLs, clicking the anchor tag triggers standard browser navigation instead: the browser navigates to the resource and shows its full URL.

create a local URL out of a blob (browser specific)

This example creates a local URL, triggers the download programmatically and revokes the local URL for clean-up.

// 1. Create a local URL for the blob (e.g. obtained with getBlob())
const objectURL = URL.createObjectURL(blob)

// 2. Use the local URL to trigger the download
const link = document.createElement("a")
link.href = objectURL
link.download = img.id + ".png"
document.body.appendChild(link)
link.click()
document.body.removeChild(link)

// 3. Clean up by revoking the local URL
URL.revokeObjectURL(objectURL)

Upload data

client SDK

upload a Blob or a File

We prepare some data in a JavaScript Blob or File object, and upload it to the reference.

const result = await uploadBytes(fileRef, file)

  • The upload is a non-conditional upsert which overrides any existing file.
  • It makes the file immediately downloadable with the SDK read functions.
  • On success, we receive an UploadResult, which wraps the bucket file's metadata and the file reference.

result.metadata // FullMetadata
result.ref
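
A minimal sketch, uploading a user-picked file from an <input type="file"> element (the uploads/ path is illustrative):

const input = document.querySelector<HTMLInputElement>("input[type=file]")
const file = input?.files?.[0]

if (file) {
    const fileRef = ref(storageService, `uploads/${file.name}`)
    const result = await uploadBytes(fileRef, file)
    result.ref // StorageReference
}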

(advanced) upload and track the progress

For each tick, we receive a snapshot. We may show the upload progress.

const uploadTask = uploadBytesResumable(ref, file)

uploadTask.on(
    "state_changed",
    /* on snapshot */
    function (snapshot) {
        // snapshot.bytesTransferred
        // snapshot.totalBytes
        // snapshot.state // "paused" | "running"
    },
    function (error) {},
    function () {
        /* on completion */
        getDownloadURL(uploadTask.snapshot.ref).then(/**/)
    }
)

admin SDK

upload a Node.js Buffer and make it downloadable

We prepare some data in a Node.js Buffer, and upload it to the reference.

await fileRef.save(imageBuffer, {
    resumable: false,
    metadata: {
        contentType: `image/png`,
        cacheControl: "public, max-age=31536000, immutable",
    },
})

Note: it doesn't make the file downloadable for clients: a client-side getDownloadURL() fails. This is because the underlying Cloud Storage object is missing a Firebase-specific download token in its metadata.

To make it downloadable for clients, we use the admin SDK's getDownloadURL(). It attaches a download token to the underlying Cloud Storage object if needed. It also returns a bearer URL (that embeds this very access token and is not subject to security rules). We can store the bearer URL in a database, send it to the client, or discard it since the client can create the URL by itself with its own getDownloadURL().

const url = await getDownloadURL(fileRef)

We may invalidate an access token at any time, from the Firebase console. If we hardcoded bearer URLs in a database, they become invalid.

advanced: controlling the Cloud Storage object's token field

The token, if any, is stored in the firebaseStorageDownloadTokens metadata field.

metadata: {
  firebaseStorageDownloadTokens: token
}

upload image example (admin SDK)

We upload an image and make it readable by clients. We may store the bypass URL.

// 1.0 create a file reference
const fileRef = bucket.file(`generated/${userID}/cat.png`)

// 1.1 create a Buffer object
const imageBuffer = base64ToBuffer(base64Data)

// 1.2 upload the Buffer object
await fileRef.save(imageBuffer, {
    resumable: false,
    metadata: {
        contentType: `image/png`,
        cacheControl: "public, max-age=31536000, immutable",
    },
})
//  1.3 make it readable by client SDKs (generate a token).
const url = await getDownloadURL(fileRef)

//  1.4 store the bypass URL (if applicable)
//  ...

Setting the bucket CORS header

Some read operations require the client's domain to be whitelisted by a CORS header. We list the authorized domains in a cors.json file and send it to Google through the CLI, with gcloud storage.

cors.json

[
    {
        "origin": ["https://imagetales.io", "http://localhost:5173"],
        "method": ["GET"],
        "maxAgeSeconds": 3600
    }
]

Send the json file:

gcloud storage buckets update gs://imagetales.firebasestorage.app --cors-file=cors.json

Describe the existing bucket CORS config

gcloud storage buckets describe gs://imagetales.firebasestorage.app --format="default(cors_config)"

read operations that require a CORS whitelist

Generally, those are browser reads relying on asynchronous (background) fetches rather than navigating to the URL through an anchor tag:

  • getBlob(fileRef) to get a Blob, which uses fetch() under the hood.
  • getBytes(fileRef) to get an ArrayBuffer, which uses fetch() under the hood.
  • using fetch() manually with a bearer URL we got with getDownloadURL() or that we stored somewhere before.

Cloud Functions

Cloud Functions is a serverless functions offering: we run code on servers operated by Google.

As it is a secure environment, we may run sensitive tasks: authenticate requests, perform server-side validation, use sensitive API keys, make sensitive writes to the database, and more.

The functions may trigger on spontaneous requests, or on events happening in the Firebase ecosystem, such as the registration of new users through Firebase Auth.

react to spontaneous requests: two options

The first option is to establish a bare-bones REST-API endpoint, called an HTTP function. It exposes a regular REST API endpoint, with an Express.js-like API.

The second option is to establish a Callable function, a pattern that involves both a server SDK and a client SDK, which work hand in hand to provide a better developer experience, such as having built-in authentication support.

onRequest and onCall are the two helpers to define those functions. They live in https.

import { onRequest, onCall } from "firebase-functions/https"

select and deploy functions

The main file exports the functions we want to deploy. The main file is the one designated in package.json:

{
    "main": "lib/index.js"
}

It is usually a barrel file that re-exports functions implemented in their own files:

export { requestPlayer } from "./requestPlayer.js"

We deploy functions imperatively. We deploy one or all of them:

firebase deploy --only functions
firebase deploy --only functions:requestPlayer

To delete a function, we remove it from the main file and run the deploy command. The CLI detects that the function is missing and prompts us for confirmation.

define functions with TypeScript

The main file must be a JavaScript file. We use a workflow that transpiles to JS. The convention is to store TS code in src/ and transpile to lib/ so that the main file is lib/index.js.

The tsconfig.json file fits projects running on Node.js:

{
    "compilerOptions": {
        "module": "NodeNext",
        "moduleResolution": "nodenext",
        "outDir": "lib",
        "esModuleInterop": true,
        "noImplicitReturns": true,
        "noUnusedLocals": true,
        "sourceMap": true,
        "strict": true,
        "target": "es2017"
    },
    "compileOnSave": true,
    "include": ["src"]
}

We make the transpilation continuous with the watch flag. The emulator detects changes in the emitted JS and updates the functions on the fly:

tsc -w

admin SDK

Within functions, we interact with Firebase services such as databases and storage with the admin SDK. For example, we may work with the project's Firestore database:

import { initializeApp } from "firebase-admin/app"
import { getFirestore } from "firebase-admin/firestore"

const app = initializeApp()
const db = getFirestore(app)

Define Callable functions

The code we run in a Callable function has access to the user authentication status and the request's data.

Callable functions support streaming the response: we describe it in a dedicated section.

Overview and syntax

synopsis

onCall<ReqData, Promise<ResData>>(callback)
onCall<ReqData, Promise<ResData>>(options, callback)

the callback

The callback has access to the request (CallableRequest), which exposes auth and data.

We define the callback async so it returns a promise. The connection is kept open until the promise settles.

onCall<ReqData, Promise<ResData>>(async (request) => {
    request.auth // AuthData | undefined
    request.auth?.uid

    request.data // ReqData

    return { message: ".." } // ResData
})

  • auth is undefined when the request is unauthenticated. It has uid otherwise.
  • ReqData defines the data sent by clients.
  • ResData defines what the callback returns.

set options

onCall accepts an optional options object, of type CallableOptions (an extension of GlobalOptions), as the first argument.

const options: CallableOptions = {
    concurrency: 1, // max parallel requests per instance
    minInstances: 1,
    maxInstances: 1,
    region: "europe-west1",
}

concurrency sets how many requests a single instance may process in parallel. By default, a single instance processes multiple requests in parallel. We set it to 1 if we prefer sequential request processing, assuming we also set maxInstances to 1.

minInstances defaults to 0. To avoid cold starts, we can set it to 1, at a higher cost since one instance stays warm at all times.

Streaming version

The request has acceptsStreaming, which we read to check if the client supports streaming. When it does, the callback has access to a response argument, on which we call response.sendChunk().

Streaming the response means to send small chunks of data with sendChunk().

The third type argument defines what kind of chunk we stream. We usually stream string chunks.

onCall<T, U, V>(options, callback) // streaming Callable
onCall<ReqData, Promise<ResData>, StreamData>(async (request, response) => {
    response?.sendChunk("abc") // StreamData
    response?.sendChunk("def")

    return { message: ".." } // fallback
})

Patterns

halt and send an error immediately

We throw an HttpsError instance with a specific error code string, which conforms to a predefined list (such as "unauthenticated", "invalid-argument", "not-found" or "permission-denied"). Errors thrown without a recognized code surface on the client as a generic internal error.

throw new HttpsError("unauthenticated", "unauthenticated")

endpoint naming: request + action

Using request denotes that the server may refuse to perform the action. It separates the request from the action proper, which may live in another file.

logger

The Cloud Functions SDK ships a structured logger. Its output shows up in the emulator console and, once deployed, in Cloud Logging. A minimal sketch:
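
import { logger } from "firebase-functions"

// inside a function's callback:
logger.info("requestPlayer called", { uid: "abc123" })
logger.error("requestPlayer failed", new Error("quota exceeded"))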

Callable v1 (deprecated)

define the function

functions.https.onCall(async (data, context) => {
    const auth = context.auth
    const message = data.message
    return { message: ".." }
})

the context object

The context object provides the authentication details, if any, such as the email, and the request metadata such as the IP address or the raw HTTP request. It is of type CallableContext.

check authentication

if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "you must be authenticated")
}

Invoke Callable functions

We get a reference to the callable function, and call it like a regular function.

specify the firebase project and the region

Since a client may interact with Cloud Functions from separate Firebase projects, we specify the project we target. We do so indirectly, by providing the app helper, which already identifies the project.

Since a function may deploy across regions as separate regional instances, we specify which instance we target. We use one of the regional identifiers defined in the Callable options. If omitted, the client SDK targets us-central1, which errors if no instance runs there.

Note: we set the region identifier at the getFunctions() level. That is, the functions helper is region-aware:

const functions = getFunctions(app, "europe-west1")

get a handle over the Callable function

We provide the function's name to httpsCallable().

const requestPokemonCF = httpsCallable<ReqData, ResData>(functions, "requestPokemon")

invoke and handle the result

We provide a payload, if applicable, of type ReqData. The result is of type HttpsCallableResult<ResData>. If it succeeds, we access the data:

const result = await requestPokemonCF({ number: 151 })
result.data // ResData

HTTP functions

overview

Establish a bare-bones REST-API endpoint, called an HTTP function. We expose a regular REST API endpoint, with an Express.js-like API.

We respond with JSON, HTML, or any other format.

export const sayHello = onRequest((req, res) => {
    res.send("Hello from Firebase!")
})

options argument

const options = {
    region: "europe-west1",
    cors: true,
}
export const sayHello = onRequest(options, (req, res) => {})

ExpressJS concepts and syntax

We may use middleware. The req and res objects have the shape of Express.js request and response objects.

invoke the function: standard HTTP request.

This is not specific to Firebase. From a web client, we use fetch().

We use the POST method and can provide a payload:
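
A minimal sketch, against a hypothetical deployed URL (the host below is illustrative):

const response = await fetch("https://sayhello-xxxxx-ew.a.run.app", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Lena" }),
})
const body = await response.json()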

Functions on Auth events

Register functions that listen and react to Firebase Auth events.

Blocking functions

run a function before the user is added to Firebase Auth

The Authentication service waits for the function to complete successfully before adding the user. If the function throws, the user is not created, and an error is thrown to the client stating that the registration failed.

import { beforeUserCreated, BlockingOptions, HttpsError } from "firebase-functions/identity"

const options: BlockingOptions = {
    region: "europe-west1",
}

export const onRegisterBlocking = beforeUserCreated(options, async (event) => {
    const user = event.data // AuthUserRecord === UserRecord
    // user.uid
    // user.email
    if (user?.email?.includes("@hotmail.com")) {
        throw new HttpsError("invalid-argument", "don't use hotmail")
    }
    // create the user in the database first, then return
    await createDefaultDataForUser(user)
    return
})

Non blocking functions

The non-blocking functions run after a user has been created or deleted by Firebase Auth.

Firebase Auth manages a list of users. It's best to mirror them in a database.

As of writing, there is no v2 version of the non-blocking functions.

export const f = auth.user().onCreate(async (user) => {})
export const g = auth.user().onDelete(async (user) => {})

example: add the user to the Firestore database

import { region } from "firebase-functions/v1"
import { db } from "../firebaseHelper.js"

export const onRegisterNonBlocking = region("europe-west1")
    .auth.user()
    .onCreate(async (user) => {
        const { uid, email } = user
        // add user to Firestore
        await db.collection("users").doc(uid).set({
            uid,
            email,
        })
        return
    })

example: delete the user from the Firestore database

import { region } from "firebase-functions/v1"
import { db } from "../firebaseHelper.js"

export const onDeleteAccount = region("europe-west1")
    .auth.user()
    .onDelete(async function (user) {
        const { uid } = user
        await db.doc("users/" + uid).delete()
        return
    })

Functions on other events

on Firestore events

Cloud functions triggered by a database event are non-blocking: they run after the write.

sanitize data post-write

// v1 syntax
exports.myFunction = functions.firestore
    .document("my-collection/{docId}")
    .onWrite((change, context) => {
        /* ... */
    })
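
For reference, a minimal sketch of the v2 equivalent, assuming the SDK's v2 subpath convention:

import { onDocumentWritten } from "firebase-functions/firestore"

export const myFunction = onDocumentWritten("my-collection/{docId}", (event) => {
    // event.data holds the before/after document snapshots:
    // event.data?.before.data() / event.data?.after.data()
})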

on Storage events

sanitize data post-upload

The user uploads a file to Firebase Storage, and we sanitize the data post-upload. For example:

exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
    const fileBucket = object.bucket // the Storage bucket that contains the file
    const filePath = object.name // file path in the bucket
    const contentType = object.contentType // file content type
    const metageneration = object.metageneration // times metadata has been generated; new objects have a value of 1
})

A typical use case: create a thumbnail for an uploaded image.

JS Dates and Callable Functions

ISO strings are the better choice

When interacting with Callable Functions, it's best to represent dates as ISO strings. It is simple to reason about: the value and the type stay consistent on the client and on the server.

If we were to work with Date fields or even Firestore Timestamp fields, the value and the type would not stay consistent between sending to the server and receiving from it. As such, it is a discouraged pattern.

In this section, we explain what happens if we send Date and Timestamp objects to Callable Functions, or if we send them to the client from Callable Functions. Before being sent, both are serialized to JSON.

sending to Callable Functions

Timestamp is a Firestore-specific type and doesn't get special treatment: it serializes to an object with seconds and nanoseconds (through toJSON()).

timestamp: { seconds: 1696751687, nanoseconds: 527000000 },

As for fields of type Date, they serialize to an ISO string (through toJSON()):

date: "2023-10-08T07:54:47.527Z"

We could technically instantiate a Timestamp or a Date:

new Timestamp(timestamp.seconds, timestamp.nanoseconds)
new Date(date)

sending from Callable functions

If we attempt to return a Date object, it serializes to an ISO string.

If we attempt to return a Timestamp object, it serializes to the internal representation, possibly an object with _seconds and _nanoseconds. We should avoid this pattern.

Environment variables

firebase secrets pattern

We provide secrets through the CLI tool. We may then request some Cloud Functions to expose the secrets as Node.js process environment variables.

firebase functions:secrets:set ABC_API_KEY

.env file pattern

env-variables docs

We may set the env variables in a .env file:

ABC_API_KEY=xxx

The .env file should not be versioned. At function deployment, the firebase CLI tool sends the .env file to Firebase servers.

read from env

read env within cloud functions

process.env

Callable function: declare the environment variable dependencies that Firebase should expose on process.env:

const options: CallableOptions = {
    region: "europe-west1",
    secrets: ["ABC_API_KEY"],
}

onCall<ReqData, Promise<ResData>>(options, async (request) => {
    const abcKey = process.env.ABC_API_KEY
})

onRequest

const options = { secrets: ["ABC_API_KEY"] }

onRequest(options, (req, res) => {
    process.env.ABC_API_KEY
})

debug secrets

gcloud secrets list --project <PROJECT_ID>

legacy secret management

We tell Firebase to save a token/key on our behalf, so that we can access it by reference in code, without writing the actual key in code (and in git as a result).

firebase functions:config:set sendgrid.key="...." sendgrid.template="TEMP"

read from env

Firebase exposes the tokens/keys in an object we get through the config() method.

const API_KEY = functions.config().sendgrid.key

Debug Functions locally

start the functions emulator

We run the functions on their own (serve), or along with other emulated services.

npm run serve
firebase emulators:start --only functions

firebase emulators:start --import emulator-data --export-on-exit

Note: by default, the callable functions must be called with the client SDK.

invoke callable functions outside the client SDK

functions:shell starts the functions emulator and starts an interactive CLI shell from which we invoke callable functions with a payload.

firebase functions:shell
npm run shell # alternative

We provide the mandatory data property. It holds the payload:

requestArticles({ data: { name: "Lena" } })

We can also invoke them with curl:

curl -s -H "Content-Type: application/json" \
  -d '{ "data": { } }' \
  http://localhost:5001/imgtale/europe-west1/request_articles

wire the client to the emulator

We redirect invocations towards the emulated functions, but only on localhost:

if (location.hostname === "localhost") {
    // ...
    connectFunctionsEmulator(functions, "localhost", 5001)
}

invoke emulated HTTP functions

We invoke HTTP functions with an HTTP request. The URL pattern is specific to the emulator.

http://localhost:5001/imgtale/europe-west1/request_articles

the deployed URL has a different pattern:

https://requestPlanet-x82jak2-ew.a.run.app

Schedule execution: Cron jobs

schedule periodic code execution

To define a schedule, we set both the periodicity and the timezone. For the periodicity, we use strings such as every day 00:00 or every 8 hours. We also provide the callback function.

import { onSchedule } from "firebase-functions/scheduler"

export const updateRankingsCRON = onSchedule(
    {
        schedule: "every day 00:00",
        timeZone: "Europe/Paris",
        region: "europe-west1",
    },
    async () => {
        // ...
    }
)

The former version (v1) uses a different API:

export const updateRankingsCRON = functions.pubsub
    .schedule("every 8 hours")
    .timeZone("Europe/Paris")
    .onRun(async (context) => {
        // ..
    })