Overview
Firebase allows us to create a backend managed by Google.
overall benefits
- good developer experience
- it scales with virtually no limit
- extensive free-tier and affordable rates
- extensive docs and well supported by AI models.
- actively developed and maintained.
CLI tool
The Firebase CLI tool is important for some workflows:
- we may run an emulated backend locally to debug it with no cost incurred: authentication, database, and cloud functions.
- it is the main way to scaffold the cloud functions directory with boilerplate code
- it is the main way to deploy cloud functions, security rules, and websites if we host them on Firebase.
- we may browse and manage the Firebase projects linked to the Firebase account.
the executable
The executable is provided by the firebase-tools npm package. It is invoked as firebase:
npm i firebase-tools
firebase
one-time setup
firebase login
firebase login:list # indicates which account is logged-in
list projects and select one
firebase projects:list
firebase use imagetales
project configuration and scaffolding
firebase init
help
firebase help
firebase help deploy
firebase help | grep firestore
cloud functions
firebase functions:list
firebase functions:shell
firebase functions:secrets:access ABC_API_KEY
firebase functions:secrets:set ABC_API_KEY
firebase functions:secrets:destroy ABC_API_KEY
firebase deploy --only functions
firebase deploy --only functions:requestPlanet
emulators
start emulators:
firebase emulators:start
firebase emulators:start --import=./emulatorData/ --export-on-exit
We specify which emulators to run in firebase.json. In the absence of such a file, the emulators will not run. The port is optional: we may omit it and provide an empty object instead.
{
"emulators": {
"firestore": { "port": 8080 },
"auth": { "port": 9099 },
"functions": { "port": 5001 },
"storage": { "port": 9199 },
"ui": { "enabled": true }
},
"storage": { "rules": "storage.rules" },
"functions": [
/* ... */
]
}
Google Cloud CLI tool
The tool unlocks some operations that are not available in the Firebase CLI.
gcloud secrets list --project <PROJECT_ID>
SDKs
client SDK
The client SDK helps us interact with Firebase from unprivileged clients, usually a browser. It may also run on Node.js, but with fewer capabilities.
npm i firebase
admin SDK
The admin SDK helps us interact with Firebase from privileged environments configured with a special, privileged account called a service account. The service account belongs to the Firebase project and authenticates the environment.
The service account allows us to skip the user-centric authentication workflows and to bypass the client-centric security rules.
Firebase Cloud Functions run on an environment that Google pre-configures with the proper service account.
npm i firebase-admin
The admin and client SDK APIs are similar in shape when applicable.
firebase cloud functions
The Firebase CLI tool adds the firebase-functions package as a dependency automatically when we add cloud functions to a project.
npm i firebase-functions
Initialization
config object: credentials
The config object provides the credentials that identify the Firebase project the app belongs to. The credentials are sent along when interacting with Firebase services.
const firebaseConfig = {
apiKey: "....",
authDomain: ".....firebaseapp.com",
projectId: "....",
storageBucket: ".....firebasestorage.app",
messagingSenderId: "....",
appId: "....",
}
the app object
we compute and store a reference to app to provide it to services such as auth or firestore.
const app = initializeApp(firebaseConfig)
environments managed by Google: skip the config
On a server managed by Google, the admin SDK is automatically configured, so we initialize the app without configuration:
const app = initializeApp() // on Google server
Auth Overview
client SDK: user authentication
The client-SDK provides workflows to authenticate the user against Firebase's authentication servers.
auth object
We store a reference to the auth object for use in various auth-related functions or to access currentUser. It is initialized with a given app object.
const auth = getAuth(app)
auth.currentUser // User | null
currentUser starts as null. When the SDK has finished initializing, it becomes the user object if the user has been logged-in automatically. Otherwise it stays null. It's also null when the user logs out.
User instance properties
uid uniquely identifies the user. Other properties are optional.
uid
email
phoneNumber
displayName
isAnonymous
listen to authentication state changes
We may listen to authentication events and update the app accordingly. Technically, we provide a callback to onAuthStateChanged, which Firebase runs on each auth event, passing it a user object.
The state changes when:
- the user registers (sign-up)
- the user logs in (sign-in)
- the user logs out (sign-out)
the login may occur in two ways:
- the user fills the login form and submits it successfully
- the user is recognized by the SDK and is logged in automatically
onAuthStateChanged(auth, (user) => {
const isAuthenticated = Boolean(user)
// setIsAuthenticated(isAuthenticated)
// setIsLoading(false)
// const userID = user.uid
})
React patterns
We store the authentication status (logged-in or not) in a state variable such as isAuthenticated.
We display the authenticated area when isAuthenticated is true.
At page launch, the authentication state is not determined.
We may add a second state variable to keep track of the loading state, such as isLoading. It starts as true and switches to false on the first authentication event.
// loading
isLoading // true
isAuthenticated // false
// loaded
isLoading // false
isAuthenticated // true or false
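A minimal sketch of this pattern, assuming React hooks; the component, the state names, and the ./firebase module are illustrative:
import { useEffect, useState } from "react"
import { onAuthStateChanged } from "firebase/auth"
import { auth } from "./firebase" // assumed module exporting the auth object

function AuthGate() {
  const [isLoading, setIsLoading] = useState(true)
  const [isAuthenticated, setIsAuthenticated] = useState(false)

  useEffect(() => {
    // the callback fires once the SDK resolves the initial auth state, then on every change
    const unsubscribe = onAuthStateChanged(auth, (user) => {
      setIsAuthenticated(Boolean(user))
      setIsLoading(false)
    })
    return unsubscribe // detach the listener on unmount
  }, [])

  if (isLoading) return <p>Loading...</p>
  return isAuthenticated ? <p>Authenticated area</p> : <p>Login form</p>
}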
automatic login after registration
If the registration succeeds, the user is automatically logged in. We may detect the change through the onAuthStateChanged listener. The user remains logged in in a given browser until a manual logout.
others
sign-out works for any auth provider:
signOut(auth)
Email account
registration
createUserWithEmailAndPassword(auth, email, password)
.then((credential) => alert(credential.user.uid))
.catch((error) => alert(error.message))
.finally(() => setIsLoading(false))
login
signInWithEmailAndPassword(auth, email, password)
password reset
We may ask Firebase to send a password-reset email. We may customize it in the Firebase console:
sendPasswordResetEmail(auth, email)
Identity Provider
Google Provider
We have to enable the Google provider in the Firebase console first.
const provider = new GoogleAuthProvider()
/* */
signInWithPopup(auth, provider).then(() => {
/* we may reload the page */
window.location.reload()
})
Anonymous account
register as anonymous
signInAnonymously(auth)
check if the account is anonymous
the account is anonymous if there is no provider data:
if (auth.currentUser?.isAnonymous)
// auth.currentUser?.providerData.length === 0
convert to an email account
we build an email credential, then link it to the account.
// 1. build an email credential
const credential = EmailAuthProvider.credential(email, password)
// 2. link the credential to the existing account
linkWithCredential(auth.currentUser, credential)
Firestore
conceptual
Firestore is a NoSQL database that is most similar to MongoDB. It's made of collections and documents.
A collection is a set of documents
A document is a set of fields. A document may contain up to 20,000 fields and at most 1 MiB of data.
A reference identifies a tentative collection or a tentative document: tentative because the reference on its own does not guarantee the existence of what it refers to.
references and release notes
- client SDK reference
- admin SDK reference - table of content
- admin SDK reference - googleapis
- admin SDK reference - cloud.google.com
import paths and init
"firebase/firestore" // client SDK
"firebase/firestore/lite" // client SDK
"firebase-admin/firestore" // admin SDK
We init a db object with the config, for use in firestore-related functions.
const db = getFirestore(app)
Collection
Collection Reference
collection reference usage
We provide the collection reference to:
- fetch all documents: getDocs(colRef)
- build a query targeting the collection: query(colRef, filters..)
- build an empty document reference within the collection: doc(colRef)
- add a document to the collection, by submitting data and delegating the ID creation to the SDK: addDoc(colRef, data) (it returns the new document's reference)
We may also use it in combination with a document's ID to uniquely identify this document, such as in doc(colRef, id)
A documentRef already encapsulates the collection reference. As such, we don't need to provide both.
build a collection reference
The path to the collection identifies it uniquely. If the collection lives at the top, aka is not a sub-collection, the path is the same as the collection name, such as "users". Otherwise it's made of several components.
We indicate the collection's path, either as:
- a single string, with no starting slash for the root collection
- a sequence of string arguments with no slashes
const collectionRef = collection(db, "users")
const collectionRef = collection(db, `users/${uid}/custom_list`)
const collectionRef = collection(db, "users", uid, "custom_list")
// firebase-admin/firestore
const collectionRef = db.collection("users")
const collectionRef = db.collection(`users/${uid}/custom_list`)
Typescript: indicate the document's type at the collection level.
The SDK cannot infer the document type of a given collection, so we provide it at the collection-reference level.
We may have a client-specific type, one that results from a client-side transformation performed with a converter. In that case, we provide the client-side type first and the Firestore type second. Otherwise, we provide the same type twice.
const playersColRef = collectionRef as CollectionReference<Player, Player>
const playersColRef = collectionRef as CollectionReference<Player, FirestorePlayer>
const playersColRef = collectionRef.withConverter(myConverter)
Firestore Converter
declare transform between firestore shape and client shape.
We may want to have a shape for the client that is distinct from the one in the database. For example:
- a property is a Timestamp in the database, but we want a Date on the client.
- We want to add client only properties, such as helper properties.
In this case, we want Firebase to transform the document when:
- receiving it from Firestore (fromFirestore())
- sending it to Firestore (toFirestore())
We define a converter, made of two functions.
fromFirestore takes a query document snapshot:
fromFirestore(snapshot: QueryDocumentSnapshot<FirestoreWorkout>): Workout{
// transform to client shape
const firestoreWorkout = snapshot.data()
const workout = { ...firestoreWorkout, date: firestoreWorkout.date.toDate() }
return workout
}
toFirestore takes the local object.
toFirestore(workout: Workout) {
// prepare the object for firestore
return { ...workout, date: Timestamp.fromDate(workout.date)}
}
and put them in a data converter of type FirestoreDataConverter, which dictates both types, provided as type parameters following the shape of: FirestoreDataConverter<AppModel, DbModel>
const myConverter: FirestoreDataConverter<Workout, FirestoreWorkout> = {
toFirestore() {},
fromFirestore() {},
}
attach the converter to the collectionRef
We attach the converter with withConverter()
const collectionRef = collection(db, "users").withConverter(myConverter)
cloud functions may not use a converter
the Admin SDK does not provide withConverter()
Document
Document reference
the document reference identifies a document within a collection, and embeds information about the parent collection.
parent is the collection reference, id is the document's id. path is the document's absolute path as a string.
docRef.parent
docRef.id
docRef.path
use reference for CRUD operations
We provide the document reference for CRUD operations:
- create or override (upsert) the document: setDoc(ref, data)
- read the document: getDoc(ref)
- update an existing document (it errors if the document is not found): updateDoc(ref, data)
- delete the document: deleteDoc(ref)
build a document reference
The document's path identifies it uniquely. We provide it either as a single string or build it from multiple strings.
Alternatively, we may provide the collectionRef and the document ID.
We may also let Firestore create a reference for us. In that case, we only need the parent collection's reference.
- compute a reference out of a path or ID.
const docRef = doc(collectionRef, id)
const docRef = doc(collectionRef) // ID is generated by firestore
const docRef = doc(db, "users", id) // collectionRef-less alternative
const docRef = doc(db, "users", "Nk....WQ")
const docRef = doc(db, "users/Nk....WQ") // path as single argument
const docRef = doc(db, "users/" + id)
const docRef = collectionRef.doc("John") // admin sdk
const docRef = collectionRef.doc("NkJz11WQ")
read single document
at ref, or at id
getDoc(docRef)
db.collection("messages").doc(id).get()
Document snapshot
The document snapshot is a wrapper that does not guarantee the document's existence.
When we request a document, we receive a snapshot, which may be empty in the sense of not containing data, but it still includes metadata.
The data, if any, is accessible at snapshot.data(options).
We may provide a config, though it's unneeded most of the time.
We may subscribe to receive snapshots in realtime on document change.
work with the snapshot and the underlying document.
check document existence
docSnapshot.exists()
get the underlying document if it exists.
docSnapshot.data() // undefined if document doesn't exist
// typescript inferred type: DocumentData | undefined
if (!data) { /* ... */ }
guard against the document not existing (redundant with exists() but needed by typescript).
other props and methods
requested id, provided ref, resulting metadata.
docSnapshot.id
docSnapshot.ref
docSnapshot.metadata
get a single property directly through the document snapshot.
docSnapshot.get("phoneNumber")
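A short sketch putting the read and the guards together (the collection and variable names are illustrative):
import { doc, getDoc } from "firebase/firestore"

const docRef = doc(db, "users", uid)
const docSnapshot = await getDoc(docRef)
if (!docSnapshot.exists()) {
  // handle the missing document
} else {
  const user = docSnapshot.data() // defined, since exists() passed
}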
Query
The result of performing a query
Firestore answers a query with a query snapshot, even when no match is found.
The query snapshot contains zero or more documents.
The documents, if any, come as a list of QueryDocumentSnapshot, similar to a list of DocumentSnapshot. All documents that appear in the list do exist, aka they are not empty document references.
We check if the query snapshot is empty:
if (querySnapshot.empty) {
// ...
}
We extract the list of documents
const cats = querySnapshot.docs.map((docSnapshot) => docSnapshot.data())
a query to read several documents
- we provide a Query.
- We receive a querySnapshot
getDocs(query(..))
collection(..).where(..).orderBy(..).limit(..).get()
note on getting all documents
A bare collection ref is accepted as a query, and we receive all the documents. It's rarely used as it's often better to:
- specify some order/sorting criteria
- limit the number of documents
getDocs(collectionRef)
collectionRef.get()
build a query
we query a specific part of the collection, aka the documents that match a criterion or a set of criteria.
const q = query(collectionRef, where(..), where(..), orderBy(..), limit(..))
const q = collection(..).where(..).orderBy(..).limit(..)
where
filter documents based on a property whose value must follow a rule. It filters out the documents that do not have this property.
where(propertyName, operator, value)
where("id", "==", user.id)
operator (provided as string)
<
<=
>
>=
==
!=
array-contains // the property is an array and contains the specified value
array-contains-any // the property is an array and contains at least one of the specified values. contains A or B or C ..
in // the property is equal to at least one of the specified values. is equal to A or B or C
not-in // the property must be different from all the specified values. not A and not B and not C.
orderBy
sort the fetched documents by a criterion (ascending by default)
orderBy(propertyName, orderDirection)
orderBy("postCount") // ascending
orderBy("postCount", "asc")
orderBy("postCount", "desc")
limit
get at most n documents
limit(n)
limit(5)
startAt, startAfter
When doing pagination, we store either:
- the count n of documents we already fetched
- the snapshot docSnapshot of the document we fetched last
startAt(n)
startAt(docSnapshot)
startAfter(n) // does not include n
startAfter(docSnapshot) // does not include docSnapshot
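A sketch of snapshot-based pagination (the collection, field, and page size are illustrative):
import { collection, getDocs, limit, orderBy, query, startAfter } from "firebase/firestore"

// first page
const firstPage = await getDocs(query(collection(db, "posts"), orderBy("createdAt"), limit(10)))
const lastSnapshot = firstPage.docs[firstPage.docs.length - 1]

// next page: start after the last document we received
const nextPage = await getDocs(
  query(collection(db, "posts"), orderBy("createdAt"), startAfter(lastSnapshot), limit(10))
)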
Run the query
We assume we will receive one or more documents; as such, we use the getDocs function
getDocs(query)
query.get()
Real time listener
we may target:
- a single document, with its reference
- several documents, through a query
we provide a callback that describes what to do with the query snapshot or the document snapshot.
query real time read
v9
const unsub = onSnapshot(query, (qs) => {
const documents = qs.docs.map((docSnapshot) => docSnapshot.data())
setMessages(documents)
})
document real time read
const unsub = onSnapshot(docRef, (docSnapshot) => {
console.log("Current data: ", docSnapshot.data())
})
Create and update data
document creation or override
we provide docRef
setDoc(docRef, data)
docRef.set(data)
reference-less document creation
addDoc(collectionRef, data)
db.collection("message").add(data)
ref creation utility
doc(db, "users", userID, "HSK1", word.character)
doc(collectionRef, word.character)
collection(db, "players")
document partial update
updateDoc(docRef, data)
setDoc(docRef, data, { merge: true })
docRef.set(data, { merge: true })
docRef.update(data)
delete document
docRef.delete()
deleteDoc(docRef)
mutate a single field
we use the update() function along with a directive on a specific field.
increment field
docRef.update({
count: FieldValue.increment(1),
})
delete field
docRef.update({
fleet: FieldValue.delete(),
})
server timestamp for field
This creates a trusted timestamp object. It is not needed when doing it from the admin SDK, because we may already trust a date created in the admin environment. Besides, it uses the Firebase-specific Timestamp instead of a multi-platform ISO date string.
docRef.update({
updatedAt: FieldValue.serverTimestamp(),
})
Increment
const partialUserDoc = {
activityScore: increment(1),
}
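We then apply the partial document with an update, assuming docRef points at the user's document:
updateDoc(docRef, partialUserDoc)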
Firestore Security rules
ruleset
rules_version = "2"
firestore scope
service cloud.firestore {
// ...
}
database scope
match /databases/{database}/documents {
// ...
}
collection's documents scope
set the rule for each document of a collection
match /collection/{document_id}{
// ...
}
match /posts/{post_id}{
// ...
}
operations and condition
allow operation, operation: if condition;
operations
read
create
update
delete
authentication and user ID
If the user is authenticated with Firebase Auth, the request comes with its Firebase Auth user ID (auth.uid).
This user ID is the only piece of information we may reasonably trust.
request.auth.uid
filtering out unauthenticated users
An attacker may attempt to reach the database without authentication. If authentication is required, the attempt fails, and does not trigger billing.
if request.auth.uid != null;
document green-flagged for the user ID
a document may have a flag that greenlights the user ID. It may be:
the document's ID, which has been made equal to the matching user ID
match /users/{user_id} {
allow read: if request.auth.uid == user_id;
}
a document's field, such as owner or owner_id, which has been made equal to the matching user ID
match /diaries/{diary_id} {
allow read: if request.auth.uid == resource.data.owner_id;
}
document green-flagged for a user's rank, status, or permission flag
Firebase Auth may not store a user's rank or set of permission flags.
Instead, we store the flags in a firestore document that matches the auth ID, and make sure the user may not tamper with them. We may store them in the users collection.
users/xyz
{
email
uid
rank: "Game Master"
}
fetch the permission, compare with requested document
fetch rank of requester
get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank
enforce specific rank
match /characters/{character_id} {
allow update: if get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank == "Game Master";
}
complete rule
service cloud.firestore {
match /databases/{database}/documents {
// Match each document in the 'characters' collection
match /characters/{characterId} {
allow update: if get(/databases/$(database)/documents/users/$(request.auth.uid)).data.rank == "Game Master";
}
}
}
enforce user's rank matching requested document rank
match /overworld_characters/{overworld_character} {
allow read: if get(/databases/$(database)/documents/characters/$(request.auth.uid)).data.zone == resource.data.zone;
}
check requested document
resource.data.uid
resource.data.zone
resource.data.required_rank
check payload, data validation
request.resource.data.uid;
request.resource.data.age > 0
// A) the user creates a post that mentions them as uid
allow create: if request.auth.uid == request.resource.data.uid;
// B) the user modifies a post that mentions them as uid,
// and must send a post that still mentions them as uid
allow update, delete: if
request.auth.uid == resource.data.uid
&&
request.auth.uid == request.resource.data.uid;
check if the request user is an admin
allow write:
if get(/databases/$(database)/documents/users/$(request.auth.uid))
.data.admin == true;
Storage
"firebase/storage"
"firebase-admin/storage"
Google Cloud Storage
Firebase Storage is a wrapper around Google Cloud Storage, which is a cloud storage service similar to Amazon S3.
It is an object storage service because it stores objects in a bucket.
A firebase project is given a default bucket.
objects in a bucket
we use the term object instead of file to emphasize some differences:
- objects are immutable. We may not edit an object. We must create a new, distinct object. This is different from files that are writable and may change over time. We can technically have a similar outcome by creating a new object and storing it with the same name. In that case, we may describe it as a new generation.
- objects live in a single, flat container, whereas files usually live in a vertical hierarchy of directories. The flat layout allows us to split up objects on different machines, which makes it easier for the provider to accommodate large storage needs.
- the flat container is called a bucket. Within a bucket, we may emulate a hierarchy by adding subpaths to file names such as public/ in
public/abc.png
default bucket's identifier and URI.
The bucket's name or identifier is a domain that uses the project's name, such as projectname.firebasestorage.app. It is unique because the project's name itself is (globally) unique.
projectname.firebasestorage.app
imagetales.firebasestorage.app
projectname.appspot.com # old domain name
the extended name or identifier is a fully-fledged URI, and uses the gs:// prefix.
gs://imagetales.firebasestorage.app
the domain name and the URI do not have a matching HTTP endpoint. If we attempt to visit them, it errors. Instead, they serve as identifiers.
specify a bucket
The client SDK picks the default bucket by default.
If we are to use a distinct bucket, we must provide its URI.
const storage = getStorage(app, bucketURI)
const storage = getStorage(app, "gs://...")
bucket information
gcloud storage buckets describe gs://imagetales.firebasestorage.app
Object: reference and metadata
object reference
the object reference does not guarantee the object's existence. We use the reference to upload data and to download the object.
It hosts properties related to the object's location and related references.
// location (strings)
bucket
fullPath
name
// reference
parent
root
build a reference
We build the reference by providing the object's name, which is its full path within the bucket. That is, it includes the file extension and the parents' path.
ref(storage, objectName)
ref(storage, "tts/2F1ZqipqaG0Mg....KoFq4Izjv.mp3") // a file name
object metadata
firebase gathers the object's metadata in a FullMetadata object, which exposes properties such as the size, the contentType and the creation date.
// location (strings)
bucket
fullPath
name
// size, type and time
size
contentType
timeCreated
// objects reference
ref
we may fetch the metadata of an existing object:
const metadata = await getMetadata(fileRef)
List objects
We build a reference to a directory and call list() or listAll(). We access the references in result.items.
const directoryRef = ref(storage, "uploads/")
const result = await listAll(directoryRef)
const result = await list(directoryRef, { maxResults: 100 })
result.items // StorageReference[]
Download an object
general considerations
A download only works if the object exists.
The access control varies based on the download method.
A download to the user's computer may involve several steps.
download a blob (browser) and expose it as a local URL
Downloading a blob is invisible to the user and happens in the background. The browser expects the bucket to send a CORS header that whitelists our domain, which we may configure with gsutil or gcloud storage. Otherwise, the browser refuses the operation.
If the browser allows it, it downloads the data and stores it into memory, as a Blob object.
getBlob(fileRef).then((blob) => {
// create the URL and trigger download imperatively
})
We may create a local URL that refers to the blob in memory and trigger a download from that local URL.
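A sketch of that local-URL step, using standard browser APIs (the suggested file name is illustrative):
getBlob(fileRef).then((blob) => {
  const localURL = URL.createObjectURL(blob) // refers to the blob in memory
  const anchor = document.createElement("a")
  anchor.href = localURL
  anchor.download = "sound.mp3" // suggested file name
  anchor.click() // triggers the download dialog
  URL.revokeObjectURL(localURL) // release the blob once done
})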
get a download HTTP URL
Alternatively, we may directly request a download URL from Firebase. Such URL remains valid unless explicitly revoked. Access control is done at URL creation time. Once the URL is created, it may be consumed without access control checks.
getDownloadURL(fileRef).then(url => ...)
The URL points to a Google domain. Such domain does not add CORS headers by default.
HTTP URL: browser specifics
Due to cross-origin restrictions, the browser prevents the user from doing a direct download when clicking a download anchor tag. Instead, the browser performs a navigation, ignoring any download attribute.
When we use the URL in a media element's src attribute, there is no download to the user's filesystem, so the browser displays the content immediately even though it's cross-origin.
Due to the absence of the CORS header by default, the browser does not allow fetching the data in the background. We may enable the CORS header if such background download is important.
Upload an object
upload some data
The upload operation is an upsert: it creates the object if it does not exist, or overrides it if it already exists.
When we build the object's reference, we may make the name unique to prevent an unintended override.
The browser SDK upload method accepts different kinds of binary data, such as a Blob or a File object.
We provide the reference and the data to uploadBytes().
uploadBytes(fileRef, file).then((snapshot) => {})
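A sketch of the unique-name idea mentioned above (the path prefix is illustrative):
// give the object a unique name to avoid overriding an existing one
const fileRef = ref(storage, `uploads/${crypto.randomUUID()}-${file.name}`)
const snapshot = await uploadBytes(fileRef, file)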
upload's result
On success, we receive the object's metadata and reference, in a UploadResult object.
const result = await uploadBytes(fileRef, file)
result.metadata
result.ref
Upload task (advanced)
working with the upload task
we may track the upload progress. For each tick, we receive a snapshot.
const uploadTask = uploadBytesResumable(ref, file)
uploadTask.on(
"state_changed",
function (snapshot) {
let progress = (snapshot.bytesTransferred / snapshot.totalBytes) * 100
console.log("Upload is " + progress + "% done")
setProgress(progress)
switch (snapshot.state) {
case "paused":
console.log("Upload is paused")
break
case "running":
console.log("Upload is running")
break
default:
break
}
},
function (error) {},
function () {
getDownloadURL(uploadTask.snapshot.ref).then(function (downloadURL) {
console.log("File available at", downloadURL)
})
}
)
Setting the bucket CORS header
Some browser operations require the domain to be whitelisted by the CORS header. We set up the bucket to whitelist our domain by providing a cors.json config file to gsutil or gcloud storage.
[
{
"origin": ["https://imagetales.io", "http://localhost:5173"],
"method": ["GET"],
"maxAgeSeconds": 3600
}
]
gsutil cors set cors.json gs://imagetales.firebasestorage.app
gcloud storage buckets update gs://imagetales.firebasestorage.app --cors-file=cors.json
gcloud storage buckets describe gs://imagetales.firebasestorage.app --format="default(cors_config)" # describe the bucket cors config
Cloud Functions
Overview
Cloud Functions is a way to run JS code through Node.js on a server managed by Google.
We assume the server is safe from tampering or leaks.
As such, we may perform server-side validation and trigger database mutations. We may store and use API keys and secrets.
Some functions are triggered on HTTP requests. Some functions are triggered by events that happen in the Firebase ecosystem such as the registration of a new user in Firebase Auth.
Two kinds of functions triggered by an HTTP request.
A Firebase HTTP function exposes a regular REST API endpoint. We must craft and send a valid HTTP request on the client, parse it manually on the server, and parse back the response on the client.
A Firebase Callable function is a pattern where the client SDK and the server SDK do more work as they create and manage the HTTP messages (requests and responses) including managing the authentication data. They may conveniently deny the request if the user is unauthenticated.
Create a function
We build functions with the onCall or onRequest helpers and export them. The helpers live in the https sub-package.
import { onRequest, onCall } from "firebase-functions/https"
onCall function example:
export const requestExamples = onCall(async (request, response) => {
/* ... */
})
Activate functions
Firebase checks the package.json's main field to find the file that exports the functions. Such file is a JS file and is usually called index.js
{
"main": "lib/index.js"
}
A function is not activated unless that file exports it as a named export. It is usually a barrel file that re-exports functions from the files where they are implemented.
export { requestExamples } from "./requestExamples.js"
Write functions in typescript, deploy JS
We must provide functions in JS. The convention is to store TS source code in src/ and store the transpiled JS in lib/.
We may make the transpilation continuous by watching the TypeScript files. This lets the emulator pick up changes on the go, since it watches the transpiled JS files.
tsc --watch
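A minimal tsconfig.json sketch matching this src/ to lib/ convention; the compiler options are an assumption, not the generated scaffold:
{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es2020",
    "outDir": "lib",
    "rootDir": "src",
    "strict": true
  },
  "include": ["src"]
}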
admin SDK
functions may interact with firebase services such as databases and storage with a privileged role by using the admin SDK.
import { initializeApp } from "firebase-admin/app"
import { getFirestore } from "firebase-admin/firestore"
const app = initializeApp()
const db = getFirestore(app)
Define Callable functions
The callable function may verify the request comes from an authenticated user, perform work and send back a result or an error. It may stream the response.
Overview and syntax
synopsis
onCall<T, U, V>(options, callback)
provide a callback
We define a callback and provide it to onCall. We define it async so it returns a promise. The connection is kept open until the promise settles.
The callback receives the request object of type CallableRequest. Its main properties are auth and data. The callback also receives a response object that we may use to stream content.
onCall<ReqData, Promise<ResData>, StreamData>(async (request, response) => {
request.auth
request.data // ReqData
response?.sendChunk("abc") // StreamData is string here
response?.sendChunk("def") // StreamData is string here
return { example: "" }
})
- request.auth is of type AuthData | undefined. It is undefined when the request is unauthenticated. It has uid and token.email props.
- request.data's type is provided by the first type parameter in onCall<T, U, V>(), here ReqData
- request's acceptsStreaming property indicates if we may stream the response.
The callback usually returns an object, whose type is provided by the second type parameter in onCall<T, U, V>(), here ResData
type example
interface ReqData {
character: string
}
interface ResData {
example: string
}
type StreamData = string
provide options
we may give onCall an options argument object of type CallableOptions (an extension of GlobalOptions) as the first argument. Options include region, concurrency, minInstances and maxInstances.
Concurrency sets how many requests a single instance may handle in parallel. By default, an instance may handle multiple requests in parallel, even when there is a single instance. We may limit the concurrency to one, so that the server processes a single request at a time.
const options: CallableOptions = {
concurrency: 1, // how many
minInstances: 1,
maxInstances: 1,
region: "europe-west1",
}
Cloud functions patterns
halt and send an error immediately
We throw an HttpsError instance with a specific error code string taken from a predefined list. If we don't provide a specific error, it defaults to an internal error.
throw new HttpsError("unauthenticated", "unauthenticated")
lightweight file: dispatch work
A callable function file may delegate application logic to internal functions, in order to remain a lightweight and readable file.
endpoint naming: requestX
using request indicates the server may refuse to perform the action. It separates the request from the action proper. The action proper may be performed by an internal function.
we may call the endpoint requestVerb, and the internal function verb or performVerb.
For example, requestCreateFleetMission calls the createFleetMission internal function.
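A sketch of this dispatch pattern, reusing the names above; the createFleetMission module and the option values are assumptions:
import { onCall, HttpsError } from "firebase-functions/https"
import { createFleetMission } from "./createFleetMission.js" // internal application logic

export const requestCreateFleetMission = onCall({ region: "europe-west1" }, async (request) => {
  if (!request.auth) {
    throw new HttpsError("unauthenticated", "you must be authenticated")
  }
  // delegate the action proper to the internal function
  return createFleetMission(request.auth.uid, request.data)
})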
v1 (deprecated)
define the function
functions.https.onCall(async (data, context) => {
const auth = context.auth
const message = data.message
return { message }
})
the context object
The context object provides the authentication details, if any, such as the email, and the request metadata such as the IP address, or the raw HTTP request. It is of type CallableContext
check authentication
if (!context.auth) {
throw new functions.https.HttpsError("unauthenticated", "you must be authenticated")
}
Invoke a Callable functions
We get a reference to the callable function, and call it like a regular function.
indicate the firebase project
since the client may work with different projects, we must provide the project's ID. We do so by providing the app object which already includes such ID.
indicate the region
A function may be deployed across multiple regions, so the client must specify which region it wants.
The region identifier we provide must match the one defined in the callable options.
The option defaults to us-central1. If the function was deployed to europe-west1, we must specify that location.
initialize the functions object
const functions = getFunctions(app, "europe-west1")
synthesize a caller function
We synthesize it through the httpsCallable function.
const requestPokemon = httpsCallable<ReqData, ResData>(functions, "requestPokemon")
The caller function returns (the promise of) a result, of type HttpsCallableResult<ResData>. It is a wrapper over the data property it embeds. It is not similar in shape and content to an HTTP response.
call and handle the result
we provide a payload, if applicable, of type ReqData
const result = await requestPokemon({ number: 151 })
The response's data, if any, lives in result.data. It has the type ResData.
result.data
Alternatively, call it and handle result with then
requestPokemon({ number: 151 })
.then((result) => {
result.data
})
.catch((error) => {})
HTTP Cloud function (REST-API)
overview
Deal with HTTP requests and responses on the client and on the server in a fashion that resembles setting up REST endpoints with an Express.js like API.
We respond with JSON, HTML, or any other format.
export const sayHello = onRequest((req, res) => {
res.send("Hello from Firebase!")
})
options argument
const options = {
region: ""
cors: true,
}
export const sayHello = onRequest(options, (req, res) => {});
ExpressJS concepts and syntax
We may use middleware. The req and res objects have the shape of Express.js req and res objects.
Build a REST request
This is not specific to firebase. From a web client, we may use fetch().
We may provide a payload and specify the method as POST.
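A sketch of such a request from a web client; the endpoint URL is illustrative:
const response = await fetch("https://europe-west1-myproject.cloudfunctions.net/sayHello", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Lena" }),
})
const text = await response.text()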
Functions on Auth events
Functions that trigger on Auth events
Blocking functions
a cloud function runs before the user is created
The Authentication service waits for the function to complete successfully before adding the user. If the function throws, the error is thrown to the client and the user is not created, aka the registration fails.
const options: BlockingOptions = {
region: "europe-west1",
}
export const onRegisterBlocking = beforeUserCreated(options, async (event) => {
const user = event.data // AuthUserRecord === UserRecord
// user.uid
// user.email
if (user?.email?.includes("@hotmail.com")) {
throw new HttpsError("invalid-argument", "don't use hotmail")
}
// create the user in the database first, then return
await createDefaultDataForUser(user)
})
Non blocking functions
As of writing, there is no v2 equivalent of onCreate.
functions.auth
functions.auth.user()
Firebase Auth has its own list of users. By default, the database doesn't have the list of users. It's best to store the users in the database too.
return a promise
it's important to return the promise so that Cloud Functions knows when the asynchronous work has settled and does not terminate the function too early.
new user through Auth => create user in database
import { auth } from "firebase-functions/v1"
export const onRegisterNonBlocking = auth.user().onCreate(async (user) => {
return db
.collection("users")
.doc(user.uid)
.set(
{
uid: user.uid,
email: user.email,
},
{ merge: true }
)
.then(() => {
console.log("user created in Firestore: " + user.email)
return true
})
.catch((error) => {
console.error("Error creating user: ", error)
return true
})
})
user deleted in auth => delete user in database
exports.deleteUserOnFirestore = functions.auth.user().onDelete((user) => {
return db
.collection("users")
.doc(user.uid)
.delete()
.then(() => {
console.log("user deleted from Firestore: " + user.email)
return true
})
.catch((error) => {
console.error("Error deleting user: ", error)
return true
})
})
Functions on Firestore events
functions.firestore
functions.firestore.document()
Validate the data sent by a user to the database. Make sure they don't send incorrect data. This helps preserve data integrity.
exports.myFunction = functions.firestore
.document("my-collection/{docId}")
.onWrite((change, context) => {
/* ... */
})
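A hedged sketch of such a validation with the v1 onWrite trigger; the collection and field names are illustrative:
const functions = require("firebase-functions/v1")
const { FieldValue } = require("firebase-admin/firestore")

exports.validatePost = functions.firestore
  .document("posts/{postId}")
  .onWrite((change, context) => {
    // deletion: nothing to validate
    if (!change.after.exists) return null
    const data = change.after.data()
    // strip a field the client is not allowed to set
    if (data.isModerator !== undefined) {
      return change.after.ref.update({ isModerator: FieldValue.delete() })
    }
    return null
  })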
on Storage events
functions.storage
the user uploads a file to Firebase Storage
perform data validation and transformations
React to completed file uploads:
exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
const fileBucket = object.bucket
// The Storage bucket that contains the file.
const filePath = object.name
// File path in the bucket.
const contentType = object.contentType
// File content type.
const metageneration = object.metageneration
// Number of times metadata has been generated. New objects have a value of 1.
})
Create a thumbnail for an uploaded image.
Environment variables
firebase secrets pattern
we provide secrets through the CLI tool. We may then request some cloud functions to expose the secrets as Node.js process environment variables.
firebase functions:secrets:set ABC_API_KEY
.env file pattern
we may set the env variables in a .env file
ABC_API_KEY=xxx
the .env should not be versioned. At function deployment, the firebase CLI tool sends the .env file to firebase servers.
read from env
read env within cloud functions
process.env
callable function: indicate the environment-variable dependencies that Firebase should expose on process.env
const options: CallableOptions = {
region: "europe-west1",
secrets: ["ABC_API_KEY"],
}
onCall<ReqData, Promise<ResData>>(options, async (request) => {
const abcKey = process.env.ABC_API_KEY
})
onRequest
const options = { secrets: ["ABC_API_KEY"] }
onRequest(options, (req, res) => {
process.env.ABC_API_KEY
})
debug secrets
gcloud secrets list --project <PROJECT_ID>
v1 (deprecated)
Tell firebase to save a token/key on our behalf so that we can access it by reference in code, without writing the actual key in code and in git as a result.
firebase functions:config:set sendgrid.key="...." sendgrid.template="TEMP"
Read from Env
Firebase exposes the tokens/keys in an object we get through the config() method.
const API_KEY = functions.config().sendgrid.key
Debug Functions locally
run the emulator
We may run the functions on their own or all emulators.
npm run serve
firebase emulators:start --only functions
firebase emulators:start --import=./saved --export-on-exit
connect the client to the emulator
we opt-in to use the emulated cloud functions:
connectFunctionsEmulator(functions, "localhost", 5001)
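A sketch of the opt-in in context; the DEV guard assumes a Vite client:
import { getFunctions, connectFunctionsEmulator } from "firebase/functions"

const functions = getFunctions(app, "europe-west1")
if (import.meta.env.DEV) {
  // route calls to the local emulator instead of production
  connectFunctionsEmulator(functions, "localhost", 5001)
}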
connect a shell to the emulator
The command starts the emulator and connects a CLI REPL shell session to it so that we may trigger the functions from the CLI:
firebase functions:shell
npm run shell # alternative
call a Callable function from the CLI: we add the request's payload to the mandatory data property.
requestArticles({ data: { name: "Lena" } })
make direct HTTP requests to the emulated functions
The URL follows a specific pattern
http://localhost:5001/imgtale/europe-west1/request_articles
curl -s -H "Content-Type: application/json" \
-d '{ "data": { } }' \
http://localhost:5001/imgtale/europe-west1/request_articles
Cron jobs
schedule code execution
the onSchedule function expects:
- a time-interval string, that may start with every, such as every day 00:00
- a callback function
v2
export const computeAndStoreRankingsCRON = onSchedule("every day 00:00", async () => {
// ...
})
v1
export const computeAndStoreRankingsCRON = functions.pubsub
.schedule("every 8 hours")
.onRun(async function (context) {
// ..
})
Deployment
deploy functions
firebase deploy --only functions
Timestamp and dates
list of interactions
- client SDK ↔ cloud function
- client SDK ↔ firestore database
- admin SDK ↔ firestore database
list of representations
Date object: clientSDK, adminSDK
Timestamp object: clientSDK, adminSDK, firestore
ISO string: clientSDK, adminSDK, firestore
(client SDK, adminSDK) ↔ firestore database
The SDK (client or admin) transforms a Date field or a Timestamp field into a timestampValue ISO date string, before sending it to Firestore.
JS object to send
const docRef = await addDoc(collection(db, "users"), {
first: "John",
last: "Appleseed",
born: 1899,
currentDate: new Date(),
})
JSON payload: (double quotes stripped)
const payload = {
streamToken: "MA==",
writes: [
{
update: {
name: "projects/fir-9-demo-b106a/databases/(default)/documents/users/hreijX....bXuvinA0",
fields: {
first: { stringValue: "John" },
last: { stringValue: "Appleseed" },
born: { integerValue: "1899" },
currentDate: { timestampValue: "2023-10-07T18:47:13.279000000Z" },
},
},
currentDocument: { exists: false },
},
],
}
Alternative with Timestamp object:
import { Timestamp } from "firebase/firestore"
currentDate: Timestamp.now()
JSON payload
currentDate: { "timestampValue": "2023-10-07T19:20:40.438000000Z" }
As we retrieve the property from Firestore, the client/admin SDK receives it as a Timestamp JSON and instantiates it as a Timestamp object.
The received date field is a Timestamp object regardless of whether we sent a Date object.
Alternative with ISO string:
currentDate: new Date().toISOString()
JSON payload
currentDate: {
stringValue: "2023-10-07T18:52:37.995Z"
}
client SDK ↔ cloud function
Send a Date or an ISO string date to the cloud function through request data: the cloud function receives an ISO string.
Send a Timestamp to the cloud function through request data: the cloud function receives a record object with two properties: seconds and nanoseconds.
date: '2023-10-08T07:54:47.527Z',
isoStringDate: '2023-10-08T07:54:47.527Z'
receivedTimestamp: { seconds: 1696751687, nanoseconds: 527000000 },
As such, the cloud function must manually instantiate either a Date or a Timestamp from the raw data it receives (the ISO string or the seconds/nanoseconds record).
instantiate a Timestamp on cloud functions
new Timestamp(receivedTimestamp.seconds, receivedTimestamp.nanoseconds)
instantiate a Date
new Date(date)
new Date(isoStringDate)
new Date(receivedTimestamp.seconds * 1000)
In the response
date
the server does not send Date fields to the client.
We must transform the date field to a number (getTime()) or a string (toJSON()) on the server before sending it back, and then parse it back to a Date manually on the client.
timestamp
the server sends a Timestamp field to the client, but it transforms it into a two-property record that contains _seconds and _nanoseconds.
We may transform the timestamp field to a number (toMillis()), send it, then parse back to a Timestamp on the client.
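A sketch of that round trip; the field names are illustrative:
// server (cloud function): serialize before returning
return { createdAtMillis: timestamp.toMillis() } // or date.getTime()

// client: parse back
const createdAt = new Date(result.data.createdAtMillis)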
GenKit
overview
Genkit enables interacting with LLM providers through a single, unified interface. It allows us to compose prompts together within a flow. It also offers a UI to test and debug prompts and flows.
packages
genkit
@genkit-ai/compat-oai
@genkit-ai/compat-oai/openai # import
dotenv # to load tokens from a .env file
#genkitx-openai #deprecated
https://github.com/firebase/genkit/tree/main/js/plugins/compat-oai
quick setup
get an ai provider with genkit()
import { genkit } from "genkit"
import openAI, { gpt4o } from "genkitx-openai"
const ai = genkit({
plugins: [openAI({ apiKey: process.env.OPENAI_API_KEY })],
model: gpt4o,
})
quick request with an inline prompt
The inline prompt may be used for simple use cases or for prototyping.
We trigger the request with generate() or generateStream(). It expects an arguments object, with a prompt property and other optional properties.
const { text } = await ai.generate({
prompt: `Pick a number between 222 and 333`,
})
return text
stream the response
We call generateStream(). stream is an async iterable, so we may iterate over it with for await.
const { response, stream } = ai.generateStream({
prompt: `Pick a number between 222 and 333`,
})
for await (const chunk of stream) {
process.stdout.write(chunk.text) // debug in stdout
}
// we may still use the complete response
const completeText = (await response).text
.prompt Prompt file
overview
The prompt file, also called a dotprompt file, is a text file with a .prompt extension. It contains the prompt, along with some metadata. The metadata lives at the top, in the file's front matter.
prompt
We may use a fixed prompt or a template prompt:
- The fixed prompt is not customizable and is used as-is.
- The template prompt is a piece of text that includes slots, which makes it customizable. The caller provides data to customize the prompt.
Translate to English: {{character}}
front matter
the front matter follows a YAML syntax, and provides metadata.
---
name: requestExamples
model: openai/gpt-4o
config:
temperature: 0.5
---
front matter for a template prompt
If the prompt is to receive arguments, we define their name, type and default value in the front matter.
The arguments live in the input object. schema provides the types, while default provides the default values.
---
...
input:
schema:
character: string
default:
character: 我
---
Translate to english: {{character}}
reference the prompt file
We provide the prompt's name (the file name without the .prompt extension). prompt() returns an object of type ExecutablePrompt.
const requestExamples = ai.prompt("requestExamples")
run the prompt file
We may use the handle to trigger the request.
// wait for completion
const { text } = await requestExamples({ character: "菠萝" })
// stream
const { stream } = requestExamples.stream({ character: "菠萝" })
Typescript Prompt file
Using typescript to define the prompt allows us to type the arguments and benefit from autocomplete for the model name.
We define a prompt with definePrompt():
import { ai } from "../utils/ai"
import { gpt41 } from "genkitx-openai"
import { z } from "zod"
export const requestExamples = ai.definePrompt({
name: "requestExamples",
model: gpt41.name,
input: {
schema: z.object({
character: z.string(),
}),
},
prompt: `Make 3 example sentences for the character {{character}}.
Provide the english translation but not the pinyin. Do not comment`,
})
call the prompt
call the prompt and wait for the full response
const { text } = await requestExamples({ character: "菠萝" })
call the prompt and stream the response
const { stream, response } = requestExamples.stream({ character: "菠萝" })
for await (const chunk of stream) {
...
}