Sudograph

Sudograph is a GraphQL database for the Internet Computer (IC).

Its goal is to become the simplest way to develop applications for the IC. Developers start by defining a GraphQL schema using the GraphQL SDL. Once the schema is defined, it can be included within a canister and deployed to the IC. An entire relational database is generated from the schema, with GraphQL queries and mutations enabling a variety of CRUD operations, including advanced querying over relational data.
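As a small illustrative sketch (the User type and field names here are just an example), a schema and the kinds of generated operations look like this:

```graphql
# An example schema type
type User {
    id: ID!
    username: String!
}

# A generated mutation: create a user
mutation {
    createUser(input: { username: "lastmjs" }) {
        id
    }
}

# A generated query: read users
query {
    readUser {
        id
        username
    }
}
```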

Sudograph should be considered somewhere between alpha and beta software.

Vision and motivation

Vision

My goal for Sudograph is for it to become the simplest, most flexible, and in the end most powerful way to develop internet applications. That's the grand vision.

To scope the vision down a bit, more realistically I want Sudograph to become the simplest, most flexible, and in the end most powerful way to develop Internet Computer applications.

Sudograph will achieve this vision by reading your GraphQL schema and generating an amazingly flexible, super simple, infinitely scalable, and extremely secure backend.

That's a lot of hyperbole! But that's also the future I want to build with Sudograph.

Achieving this vision begins with enabling CRUD operations within a single canister, then quickly moves into migrations, authorization, and multi-canister scaling.

Sudograph is currently transitioning from alpha into beta, and there's a long journey ahead.

Motivation

I have been developing with GraphQL since around 2016. It immediately struck me as a powerful way to manage the complexities of reading and writing data for non-trivial internet applications. Since then it has proven extremely versatile, and I have used it across a number of projects with a variety of underlying data sources.

Though GraphQL simplifies development, implementing it is not always simple. It still requires you to write a lot of code to bring your schema to life, in large part because GraphQL does not solve the problem of how data is read and written.

There are a number of libraries that have been developed in the recent past to address this problem. You can think of these as GraphQL generators. They attempt in one way or another to take a GraphQL schema and generate the code required to read and write data.

During my journey to find the perfect GraphQL generator, I went from Graphcool to Prisma to Graphback to finally writing a GraphQL generator from scratch. And there are other similar projects out there, like Hasura and PostGraphile.

No project has gotten it right yet, and each library has trade-offs and falls short of the vision of generating an amazingly flexible, super simple, infinitely scalable, and extremely secure backend from a GraphQL schema. It's a very difficult problem.

The Internet Computer may provide a very interesting solution.

The Internet Computer promises flexibility, simplicity, scalability, and security like no other platform before it. Combining the powers of GraphQL with the Internet Computer may be the best chance we have yet to achieve this vision.

Examples

Multiple examples are located in the examples directory in the Sudograph repository.

Quickest of quick starts (new project)

This section is designed to get you going completely from scratch. It assumes you want to have a frontend, a GraphQL playground, and the graphql canister. If you instead wish to integrate Sudograph into an existing project, see the Existing project section.

If you've already got Node.js, npm, Rust, the wasm32-unknown-unknown Rust compilation target, and dfx 0.7.2 installed then just run the following commands:

mkdir my-new-project
cd my-new-project
npx sudograph
dfx start --background
dfx deploy

Once deployed, you can visit the playground and frontend canisters from a Chromium browser; the URLs are listed in the local deployment section.

If the above did not work, try the full installation steps in the quick start.

More information is available for local deployment and IC deployment.

Quick start (new project)

This section is designed to get you going completely from scratch. It assumes you want to have a frontend, a GraphQL playground, and the graphql canister. If you instead wish to integrate Sudograph into an existing project, see the Existing project section.

Prerequisites

You should have the following installed on your system:

  • Node.js
  • npm
  • Rust
  • wasm32-unknown-unknown Rust compilation target
  • dfx 0.7.2

If you already have the above installed, you can skip to Sudograph generate.

Run the following commands to install Node.js and npm. nvm is highly recommended and its use is shown below:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash

# restart your terminal

nvm install 14

Run the following command to install Rust and the wasm32-unknown-unknown target:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

rustup target add wasm32-unknown-unknown

Run the following command to install dfx 0.7.2:

# Sudograph has been tested against version 0.7.2, so it is safest to install that specific version for now
DFX_VERSION=0.7.2 sh -ci "$(curl -fsSL https://sdk.dfinity.org/install.sh)"

Sudograph generate

Start by making a new directory for your project. You then simply run the sudograph generate command:

mkdir my-new-project

cd my-new-project

npx sudograph

Deployment

Use the following links for more information about local deployment and IC deployment.

Existing project

The quickest of quick starts and quick start are both designed to get you started with an entire example project from scratch. If instead you wish to integrate Sudograph into an existing project, this section will help you to achieve that.

Basically you need to add a new Rust canister to your project and import and call the graphql_database procedural macro. If you're new to developing for the Internet Computer, you might want to check the documentation to get familiar with canister development. The detailed steps are listed out below, but looking at examples might also help a lot.

Make sure you at least have Rust, the wasm32-unknown-unknown Rust compilation target, and dfx 0.7.2 installed on your system. If you need help setting all of that up, look at the prerequisites section of the quick start.

There are a few basic steps to integrate Sudograph into an existing project:

  • Edit dfx.json in root directory
  • Add Cargo.toml to root directory
  • Create graphql canister crate
  • Create GraphQL schema
  • Import and call the graphql_database procedural macro
  • Create candid file
  • Deploy

Edit dfx.json in root directory

Add a new canister to your dfx.json in the root directory of your project. You can name the canister whatever you'd like, but to keep things simple we'll call it graphql. If you already have other canisters defined, just add the graphql canister alongside them. The configuration below assumes a directory structure with a canisters directory containing each canister; if you prefer a different layout, just update all of the paths accordingly:

{
    "canisters": {
        "graphql": {
            "type": "custom",
            "build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
            "candid": "canisters/graphql/src/graphql.did",
            "wasm": "target/wasm32-unknown-unknown/release/graphql.wasm"
        }
    }
}

Add Cargo.toml to root directory

In the root directory of your project create a Cargo.toml file with the following contents:

[workspace]
members = [
    "canisters/graphql",
]

[profile.release]
lto = true
opt-level = 'z'

Again, this assumes your project has a canisters directory where the graphql canister is defined. You can change the directory structure if you wish; just make sure to update this Cargo.toml file.

Create graphql canister crate

Create a new directory within canisters called graphql, and add a Cargo.toml file. It should look like the following:

[package]
name = "graphql"
version = "0.0.0"
edition = "2018"

[lib]
path = "src/graphql.rs"
crate-type = ["cdylib"]

[dependencies]
sudograph = "0.3.0"
ic-cdk = "0.3.0" # TODO this will go away once https://github.com/dfinity/candid/pull/249 is released

Within the canisters/graphql directory, now create a src directory. The canisters/graphql/src directory will contain your GraphQL schema, the Rust entrypoint to your graphql canister, and your candid file.

Create GraphQL schema

Within the canisters/graphql/src directory, create your schema.graphql file. The following is just an example:

type User {
    id: ID!
    username: String!
    blogPosts: [BlogPost!]! @relation(name: "User:blogPosts::BlogPost:author")
}

type BlogPost {
    id: ID!
    publishedAt: Date
    title: String!
    author: User! @relation(name: "User:blogPosts::BlogPost:author")
}

Import and call the graphql_database procedural macro

Within the canisters/graphql/src directory, create your graphql.rs file. The file should look like this:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

This simply imports the graphql_database procedural macro from sudograph and then invokes it with the path to your schema.graphql file. This is where the magic happens: the database, along with all of the CRUD queries and mutations, is generated from your schema.

Create candid file

Within the canisters/graphql/src directory, create your graphql.did file. The file should look like this:

service : {
    "graphql_query": (text, text) -> (text) query;
    "graphql_mutation": (text, text) -> (text);
}

The generated canister code will have created the two functions defined in graphql.did, but for now you'll need to create the candid file manually. Hopefully in the future it can be generated for you or abstracted away somehow.

graphql_query and graphql_mutation both take two parameters. The first parameter is the query or mutation string. The second parameter is a JSON string containing any variables for the query or mutation. Currently the second parameter is required, so just send an empty JSON object string "{}" if no variables are required for the query or mutation.

graphql_query and graphql_mutation both return the result of the query or mutation as a JSON string. Whatever client is consuming the query or mutation will then need to parse the JSON string to turn it into a language-level object. The Sudograph Client will do this for you in a JavaScript frontend.

Deploy

Use the following links for more information about local deployment and IC deployment.

Local deployment

Start up an IC replica and deploy:

# Open a terminal and run the following command to start a local IC replica
dfx start

# Alternatively to the above command, you can run the replica in the background
dfx start --background

# If you are running the replica in the background, you can run this command within the same terminal as the dfx start --background command
# If you are not running the replica in the background, then open another terminal and run this command from the root directory of your project
dfx deploy

Make sure to run dfx deploy for your first deploy. For quicker deployments after the first, you can run dfx deploy graphql if you've only changed your schema or the Rust code within the graphql canister. dfx deploy graphql will only deploy the graphql canister, which contains the generated database.

playground canister

Start executing GraphQL queries and mutations against your database by going to the following URL in a Chromium browser: http://r7inp-6aaaa-aaaaa-aaabq-cai.localhost:8000.

frontend canister

View a simple frontend application that communicates with the graphql canister by going to the following URL in a Chromium browser: http://rrkah-fqaaa-aaaaa-aaaaq-cai.localhost:8000.

command line

You can execute queries against the graphql canister from the command line if you wish:

# send a query to the graphql canister
dfx canister call graphql graphql_query '("query { readUser { id } }", "{}")'

# send a mutation to the graphql canister
dfx canister call graphql graphql_mutation '("mutation { createUser(input: { username: \"lastmjs\" }) { id } }", "{}")'

Sudograph Client

See the Sudograph Client documentation for more information. Here's a simple example of using Sudograph Client from a JavaScript frontend:

import {
    gql,
    sudograph
} from 'sudograph';

const {
    query,
    mutation
} = sudograph({
    canisterId: 'ryjl3-tyaaa-aaaaa-aaaba-cai'
});

async function getUserIds() {
    const result = await query(gql`
        query {
            readUser {
                id
            }
        }
    `);

    const users = result.data.readUser;

    return users;
}

Rust canister

If you want to call into the graphql canister from another Rust canister, first update your dfx.json and then implement the Rust canister itself.

Make sure to include the graphql canister as a dependency of your Rust canister in dfx.json:

{
    "canisters": {
        "graphql": {
            "type": "custom",
            "build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
            "candid": "canisters/graphql/src/graphql.did",
            "wasm": "target/wasm32-unknown-unknown/release/graphql.wasm"
        },
        "playground": {
            "type": "assets",
            "source": ["canisters/playground/build"]
        },
        "rust": {
            "type": "custom",
            "build": "cargo build --target wasm32-unknown-unknown --package rust --release",
            "candid": "canisters/rust/src/rust.did",
            "wasm": "target/wasm32-unknown-unknown/release/rust.wasm",
            "dependencies": [
                "graphql"
            ]
        }
    }
}

And then in your Rust canister:


use ic_cdk;
use ic_cdk_macros;

#[ic_cdk_macros::import(canister = "graphql")]
struct GraphQLCanister;

#[ic_cdk_macros::query]
async fn get_all_users() -> String {
    let result = GraphQLCanister::graphql_query(
        "
            query {
                readUser {
                    id
                }
            }
        ".to_string(),
        "{}".to_string()
    ).await;

    result.0
}

Motoko canister

If you want to call into the graphql canister from a Motoko canister:

import Text "mo:base/Text";

actor Motoko {
    let GraphQLCanister = actor "rrkah-fqaaa-aaaaa-aaaaq-cai": actor {
        graphql_query: query (Text, Text) -> async (Text);
        graphql_mutation: (Text, Text) -> async (Text);
    };

    public func get_all_users(): async (Text) {
        let result = await GraphQLCanister.graphql_query("query { readUser { id } }", "{}");

        return result;
    }
}

Wasm binary optimization

If the replica rejects deployment of your canister because the payload is too large, you may need to optimize your Wasm binary.

IC deployment

Before deploying to the Internet Computer you should understand that Sudograph is alpha/beta software. There are missing features and potential bugs. There is also no way to easily migrate data (if you change your schema, you'll need to either delete your state and start over or manually make changes to the Sudograph data structures). But if you must deploy to the IC, here is the command:

dfx deploy --network ic

Wasm binary optimization

If the replica rejects deployment of your canister because the payload is too large, you may need to optimize your Wasm binary.

Wasm binary optimization

At some point your compiled Rust Wasm binary may grow too large and be rejected by the replica on deploy. This could happen because the Rust source code that you've written has grown too large, or because your schema has grown too large. A large schema leads to a large amount of generated Rust code.

To temporarily overcome this issue (optimization only goes so far; eventually the binary will be too big and the Internet Computer will need to address that), you can optimize your Rust Wasm binary.

Manual optimization

To do this manually, in the root of your directory run the following command once to install the optimizer:

cargo install ic-cdk-optimizer --root target

You should also change your dfx.json file from:

{
    "canisters": {
        "graphql": {
            "type": "custom",
            "build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
            "candid": "canisters/graphql/src/graphql.did",
            "wasm": "target/wasm32-unknown-unknown/release/graphql.wasm"
        }
    }
}

to:

{
    "canisters": {
        "graphql": {
            "type": "custom",
            "build": "cargo build --target wasm32-unknown-unknown --package graphql --release",
            "candid": "canisters/graphql/src/graphql.did",
            "wasm": "target/wasm32-unknown-unknown/release/graphql-optimized.wasm"
        }
    }
}

The only thing that changed was the wasm property of the graphql canister object, and it changed from "wasm": "target/wasm32-unknown-unknown/release/graphql.wasm" to "wasm": "target/wasm32-unknown-unknown/release/graphql-optimized.wasm".

Each time you run dfx deploy or dfx deploy graphql, you will need to run the following command after:

./target/bin/ic-cdk-optimizer ./target/wasm32-unknown-unknown/release/graphql.wasm -o ./target/wasm32-unknown-unknown/release/graphql-optimized.wasm

Automatic optimization

It can be tedious to run the above command manually after each dfx deploy. You could automate it with cargo scripts, make, bash, or any other build or scripting system.

Another way is to adopt npm scripts. Your package.json could look something like this:

{
    "scripts": {
        "build": "cd canisters/playground && npm install && npm run build && cd ../frontend && npm install && npm run build",
        "dfx-deploy": "npm run dfx-build-graphql && npm run dfx-optimize-graphql && dfx deploy",
        "dfx-deploy-graphql": "npm run dfx-build-graphql && npm run dfx-optimize-graphql && dfx deploy graphql",
        "dfx-build-graphql": "cargo build --target wasm32-unknown-unknown --package graphql --release",
        "dfx-optimize-graphql": "./target/bin/ic-cdk-optimizer ./target/wasm32-unknown-unknown/release/graphql.wasm -o ./target/wasm32-unknown-unknown/release/graphql-optimized.wasm"
    }
}

Then instead of running dfx deploy or dfx deploy graphql you would run npm run dfx-deploy or npm run dfx-deploy-graphql.

In the future it would be nice for dfx.json to allow for some sort of build scripts, which would make this process less messy. There is an open forum post about this on the DFINITY developer forum.

Sudograph Client

NOTICE: Considering that custom resolvers are temporarily disabled, and that it is not yet possible to provide per-field authorization from your schema, the Sudograph Client probably isn't very useful right now. You will most likely not want to execute GraphQL queries from the frontend until certain candid and authorization issues are worked out. Instead, you'll want to create custom functions in your Rust or Motoko canisters that provide their own authorization and call into your graphql canister. See the Rust canister and Motoko canister sections for information and examples.

The Sudograph Client is a frontend JavaScript/TypeScript library that provides a convenient API for interacting with your deployed graphql canister. It is an alternative to using agent-js directly, and currently works only for the frontend (Node.js support will come later).

Installation

Install Sudograph Client into your frontend project with npm install sudograph.

Use

In addition to the code on this page, many of the examples have frontend projects that show Sudograph Client in use.

For our example, let's imagine we have some sort of frontend UI component defined in a JavaScript file called component.js. You could import and prepare Sudograph Client for use as follows:

// component.js

import {
    gql,
    sudograph
} from 'sudograph';

const {
    query,
    mutation
} = sudograph({
    canisterId: 'ryjl3-tyaaa-aaaaa-aaaba-cai'
});

Above we import the gql tag and the sudograph function. The gql tag will be used for queries later on. To prepare for query or mutation execution, we call the sudograph function and pass in an options object. In this case, we simply put in the canister id of our graphql canister. The options object looks like this in TypeScript:

import { Identity } from '@dfinity/agent';

export type Options = Readonly<{
    canisterId: string;
    identity?: Identity;
    queryFunctionName?: string;
    mutationFunctionName?: string;
}>;

query

If we want to execute a query, we would do so as follows. Imagine defining a function to return all user ids:

// component.js

async function getUserIds() {
    const result = await query(gql`
        query {
            readUser {
                id
            }
        }
    `);

    const users = result.data.readUser;

    return users;
}

By the way, the gql tag is just a nice way to integrate with existing editor tools, such as syntax highlighting and type checking. You can remove it if you'd like.

mutation

If we want to execute a mutation, we would do so as follows. Imagine defining a function to create a user:

// component.js

async function createUser(username) {
    const result = await mutation(gql`
        mutation ($username: String!) {
            createUser(input: {
                username: $username
            }) {
                id
            }
        }
    `, {
        username
    });

    const user = result.data.createUser;

    return user;
}

Changing query and mutation canister function names

The queryFunctionName and mutationFunctionName properties of the options object that we pass into the sudograph function allow us to specify the names of the canister functions that are exposed by our graphql canister. By default the generated query and mutation function names are graphql_query and graphql_mutation. Sudograph Client will assume those names should be used unless queryFunctionName and mutationFunctionName are supplied by the developer.
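As a sketch, an options object overriding the default function names might look like this. The Options type is reproduced from above with Identity simplified to unknown, and the custom function names here are hypothetical:

```typescript
// Options shape as documented above; identity narrowed to unknown for this sketch
type Options = Readonly<{
    canisterId: string;
    identity?: unknown;
    queryFunctionName?: string;
    mutationFunctionName?: string;
}>;

// A hypothetical graphql canister that exposes renamed query/mutation functions
const options: Options = {
    canisterId: 'ryjl3-tyaaa-aaaaa-aaaba-cai',
    queryFunctionName: 'my_graphql_query',
    mutationFunctionName: 'my_graphql_mutation'
};
```

You would then pass this object to the sudograph function as shown earlier.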

Authentication

The identity property of the options object that we pass into the sudograph function helps us out with authentication, and its type is defined by @dfinity/agent. If we pass in an identity object, it will be passed into the constructor of the @dfinity/agent HttpAgent that Sudograph Client is creating for you under the hood. This identity will be used to sign query and mutation requests, allowing you to implement authorization inside of your graphql canister.

The files example shows how to use Internet Identity with a graphql canister.

agent-js

If you don't wish to use Sudograph Client, you can reach for the lower-level agent-js library.

Installation

Install agent-js into your frontend project with npm install @dfinity/agent.

Use

In addition to the code on this page, the Sudograph Client implementation is a very good example of how to use agent-js directly to interact with a graphql canister.

For our example, let's imagine we have some sort of frontend UI component defined in a JavaScript file called component.js. You could import and prepare agent-js for use as follows:

// component.js

import {
    Actor,
    HttpAgent
} from '@dfinity/agent';

const idlFactory = ({ IDL }) => {
    return IDL.Service({
        graphql_query: IDL.Func([IDL.Text, IDL.Text], [IDL.Text], ['query']),
        graphql_mutation: IDL.Func([IDL.Text, IDL.Text], [IDL.Text], [])
    });
};

const agent = new HttpAgent();

const actor = Actor.createActor(idlFactory, {
    agent,
    canisterId: 'ryjl3-tyaaa-aaaaa-aaaba-cai'
});

Above we manually construct an IDL Factory describing the graphql_query and graphql_mutation functions exported from our canister. We then create an agent and use that agent with the canister id of our graphql canister to create an actor.

query

If we want to execute a query, we would do so as follows. Imagine defining a function to return all user ids:

// component.js

async function getUserIds() {
    const result = await actor.graphql_query(`
        query {
            readUser {
                id
            }
        }
    `, JSON.stringify({}));

    const resultJSON = JSON.parse(result);

    const users = resultJSON.data.readUser;

    return users;
}

mutation

If we want to execute a mutation, we would do so as follows. Imagine defining a function to create a user:

// component.js

async function createUser(username) {
    const result = await actor.graphql_mutation(`
        mutation ($username: String!) {
            createUser(input: {
                username: $username
            }) {
                id
            }
        }
    `, JSON.stringify({
        username
    }));

    const resultJSON = JSON.parse(result);

    const user = resultJSON.data.createUser;

    return user;
}

Authentication

The HttpAgent from @dfinity/agent takes an object as a parameter to its constructor. That object has a property called identity of type Identity, which can be found in @dfinity/agent. This identity will be used to sign requests made by the actor object that we create, allowing you to implement authorization inside of your graphql canister.

The files example shows how to use Internet Identity with a graphql canister.

Schema

The schema is where you define all of the data types of your application, including relations between types. It is also where you will eventually define many other settings, possibly including authentication, authorization, subnet, and Sudograph-specific settings.

An example schema might look like this:

type User {
    id: ID!
    username: String!
    blogPosts: [BlogPost!]! @relation(name: "User:blogPosts::BlogPost:author")
}

type BlogPost {
    id: ID!
    publishedAt: Date
    title: String!
    author: User! @relation(name: "User:blogPosts::BlogPost:author")
}

We have told Sudograph that we have two object types, User and BlogPost. We've described the fields of each type, using some included scalar types such as ID, Date, and String. We have also described one relation between our two types, a one-to-many relationship from User to BlogPost on the fields User:blogPosts and BlogPost:author.

The schema is an incredibly powerful yet simple tool for defining the complex data types of your application. Get to know the possibilities of your schema:

Scalars

Scalar types are not divisible; they have no fields of their own. The scalar types automatically available to you in a Sudograph schema are:

Blob

A Blob value maps to a Rust Vec<u8>.

type File {
    id: ID!
    contents: Blob!
}

Query or mutation inputs of type Blob should be strings or arrays of numbers that can be converted into Rust u8 numbers. Blob types in selection sets are always returned as JSON arrays of numbers.

An example in JavaScript of inputting a string for a Blob:

async function createSmallFile() {
    const result = await mutation(gql`
        mutation ($contents: Blob!) {
            createFile(input: {
                contents: $contents
            }) {
                contents
            }
        }
    `, {
        contents: 'hello'
    });

    const file = result.data.createFile;

    console.log(file);
}

The logged contents of the file would be this: [104, 101, 108, 108, 111].

You can convert the array of numbers back to a string like so:

[104, 101, 108, 108, 111].map(x => String.fromCharCode(x)).join('')

An example in JavaScript of inputting an array of numbers for a Blob:

async function createSmallFile() {
    const result = await mutation(gql`
        mutation ($contents: Blob!) {
            createFile(input: {
                contents: $contents
            }) {
                contents
            }
        }
    `, {
        contents: 'hello'.split('').map(x => x.charCodeAt())
    });

    const file = result.data.createFile;

    console.log(file);
}

The logged contents of the file would be this: [104, 101, 108, 108, 111].

You can convert the array of numbers back to a string like so:

[104, 101, 108, 108, 111].map(x => String.fromCharCode(x)).join('')

Blob types in selection sets can use offset and limit to grab specific bytes:

async function createSmallFile() {
    const result = await mutation(gql`
        mutation ($contents: Blob!) {
            createFile(input: {
                contents: $contents
            }) {
                contents(offset: 1, limit: 3)
            }
        }
    `, {
        contents: 'hello'
    });

    const file = result.data.createFile;

    console.log(file);
}

The logged contents of the file would be this: [101, 108, 108].

You can convert the array of numbers back to a string like so:

[101, 108, 108].map(x => String.fromCharCode(x)).join('')

Boolean

A Boolean value maps to a Rust bool.

type User {
    id: ID!
    verified: Boolean!
}

Date

A Date value maps to a Rust String for storage and a chrono::DateTime for filtering.

type User {
    id: ID!
    dateOfBirth: Date!
}

Query or mutation inputs of type Date should be strings that can be parsed by chrono::DateTime. For example, in JavaScript new Date().toISOString() would be an acceptable format.

An example in JavaScript:

async function getUsersInInterval() {
    const result = await query(gql`
        query ($startDate: Date!, $endDate: Date!) {
            readUser(search: {
                dateOfBirth: {
                    gte: $startDate
                    lt: $endDate
                }
            }) {
                id
            }
        }
    `, {
        startDate: new Date('2021-07-01').toISOString(),
        endDate: new Date('2021-07-02').toISOString()
    });

    const users = result.data.readUser;

    return users;
}

Float

A Float value maps to a Rust f32.

type User {
    id: ID!
    height: Float!
}

ID

An ID value maps to a Rust String. All Sudograph object types must have a field called id of type ID.

type User {
    id: ID!
}

Int

An Int value maps to a Rust i32.

type User {
    id: ID!
    age: Int!
}

JSON

A JSON value maps to a Rust String.

type User {
    id: ID!
    meta: JSON!
}

Query or mutation inputs of type JSON should be any valid JSON value. JSON types in selection sets are always returned as JSON values.

String

A String value maps to a Rust String.

type User {
    id: ID!
    username: String!
}

Enums

Enums are essentially scalar types whose value must be one of a predetermined set of values defined statically in the schema. Enums are represented as strings in the database and in selection sets.

Here's a simple example of a Color enum:

type User {
    id: ID!
    favoriteColor: Color!
}

enum Color {
    WHITE
    BLUE
    GOLD
    SILVER
    YELLOW
    PURPLE
}

Objects

Object types have fields that may be types such as other object types or scalars or enums. Object types allow you to define the truly custom data types and relations that make up your application.

You could model a user with blog posts like so:

type User {
    id: ID!
    username: String!
    blogPosts: [BlogPost!]! @relation(name: "User:blogPosts::BlogPost:author")
}

type BlogPost {
    id: ID!
    publishedAt: Date
    title: String!
    author: User! @relation(name: "User:blogPosts::BlogPost:author")
}

You could model a family tree like so:

# TODO this example will not work yet
# TODO the self-referencing has some issues and multiple @relation directives per field is not yet supported
type Person {
    id: ID!
    firstName: String!
    lastName: String!
    father: Person @relation(name: "Person:father::Person:children")
    mother: Person @relation(name: "Person:mother::Person:children")
    children: [Person!]!
        @relation(name: "Person:father::Person:children")
        @relation(name: "Person:mother::Person:children")
}

You could model Ethereum block data like so:

type Block {
    id: ID!
    number: Int!
    hash: String!
    parent: Block
    transactionsRoot: String!
    transactionCount: Int!
    stateRoot: String!
    gasLimit: String!
    gasUsed: String!
    timestamp: Date!
    transactions: [Transaction!]! @relation(name: "Block:transactions::Transaction:block")
}

type Transaction {
    id: ID!
    hash: String!
    index: Int!
    from: String!
    to: String!
    value: String!
    gasPrice: String!
    gas: String!
    inputData: String!
    block: Block! @relation(name: "Block:transactions::Transaction:block")
    gasUsed: String!
}

Relations

Relations allow you to describe the relationships between object types and their fields. Sudograph has a variety of relation capabilities.

Please note that the name argument of the @relation directive is just an arbitrary string; there is no DSL required. The only requirement is that the name argument be the same on both sides of the relation.

Also note that you can only have one @relation directive per field for now.

One-to-one relations

One-to-one relations allow you to connect one object with another object.

One-sided

If you only care about retrieving relation information from one side of the relation, you don't need a @relation directive:

type Foot {
    id: ID!
    shoe: Shoe
}

type Shoe {
    id: ID!
}

In the above example, you will be able to select the shoe of a foot, like so:

query {
    readFoot(search: {
        id: {
            eq: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
        }
    }) {
        id
        shoe {
            id
        }
    }
}

You will not be able to select the foot of a shoe.

Two-sided

If you care about retrieving relation information from both sides of the relation, add a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.

type Foot {
    id: ID!
    shoe: Shoe @relation(name: "Foot:shoe::Shoe:foot")
}

type Shoe {
    id: ID!
    foot: Foot @relation(name: "Foot:shoe::Shoe:foot")
}

One-to-many relations

One-to-many relations allow you to connect one object with multiple other objects.

One-sided

If you only care about retrieving relation information from one side of the relation, you don't need a @relation directive:

type Monkey {
    id: ID!
    name: String!
    bananas: [Banana!]!
}

type Banana {
    id: ID!
    color: String!
    size: Int!
}

In the above example, you will be able to select the bananas of a monkey, like so:

query {
    readMonkey(search: {
        id: {
            eq: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
        }
    }) {
        id
        name
        bananas {
            id
            color
            size
        }
    }
}

You will not be able to select the monkey of a banana.

Two-sided

If you care about retrieving relation information from both sides of the relation, add a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.

type Monkey {
    id: ID!
    name: String!
    bananas: [Banana!]! @relation(name: "Monkey:bananas::Banana:monkey")
}

type Banana {
    id: ID!
    color: String!
    size: Int!
    monkey: Monkey @relation(name: "Monkey:bananas::Banana:monkey")
}

Many-to-many relations

Many-to-many relations allow you to connect multiple objects with multiple other objects. Many-to-many relations must have a @relation directive. The name argument of the @relation directive can be arbitrary, but it must be the same on both sides of the relation.

type Author {
    id: ID!
    documents: [Document!]! @relation(name: "Author:documents::Document:authors")
}

type Document {
    id: ID!
    text: String!
    authors: [Author!]! @relation(name: "Author:documents::Document:authors")
}

Custom scalars

Custom scalars (scalars that you define) are not yet supported. You'll have to work with the included scalars: Blob, Boolean, Date, Float, ID, Int, JSON, and String.

Custom resolvers

NOTICE: Custom resolvers are temporarily disabled. See this issue. Until certain Candid and authorization issues are worked out, you'll want to create custom functions in your Rust or Motoko canisters that provide their own authorization and call into your graphql canister. See here and here for some information and examples.

DISCLAIMER: Custom resolvers have only been minimally tested. Information presented here may not be entirely accurate. If you find issues please get in contact with @lastmjs or open issues on the repository.

Though Sudograph generates many powerful CRUD operations for you, it will not be able to cover every conceivable requirement of your applications. Custom resolvers provide a way for you to create your own functionality that is accessible through the same GraphQL API as Sudograph's generated functionality. There are two main locations where a resolver can be written: within the graphql canister, or in a separate canister.

Resolvers within the graphql canister

You can see a full example of Rust custom resolvers here.

To write resolvers within your graphql canister, start by augmenting your schema, for example in canisters/graphql/src/schema.graphql:

type Query {
    custom_get(id: ID!): Message
}

type Mutation {
    custom_set(id: ID!, text: String): Boolean!
}

type Message {
    id: ID!
    text: String!
}

We've added one custom query and one custom mutation to the schema. Next we need to implement the resolvers in code.

To implement a resolver, we add an asynchronous function to the Rust file that contains our graphql_database macro invocation. The function should have the same name as the query or mutation in the schema, and should use parameter and return types that match the types in the schema. The return type should be a Result with the Ok variant matching the return type in the schema, and you should use sudograph::async_graphql::Error as the Err variant. Object types generated from your schema are automatically in scope in Rust, because they are generated by the graphql_database macro.

Type conversions between GraphQL and Rust can be found here.

Now we'll implement the custom resolvers for the query and mutation in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;
use std::collections::HashMap;

graphql_database!("canisters/graphql/src/schema.graphql");

type PrimaryKey = String;
type MessageStore = HashMap<PrimaryKey, Option<Message>>;

async fn custom_get(id: ID) -> Result<Option<Message>, sudograph::async_graphql::Error> {
    let message_store = sudograph::ic_cdk::storage::get::<MessageStore>();

    let message_option = message_store.get(&id.to_string());

    match message_option {
        Some(message) => {
            return Ok(message.clone());
        },
        None => {
            return Ok(None);
        }
    };
}

async fn custom_set(id: ID, text: Option<String>) -> Result<bool, sudograph::async_graphql::Error> {
    let message_store = sudograph::ic_cdk::storage::get_mut::<MessageStore>();

    let message = match text {
        Some(text_value) => Some(Message {
            id: id.clone(),
            text: text_value
        }),
        None => None
    };

    message_store.insert(id.to_string(), message);

    return Ok(true);
}

Resolvers within a different canister

You can also write resolvers that are deployed to other canisters, using any language supported by the Internet Computer. For now you'll most likely be using Rust or Motoko, so examples are included below.

The process is similar to what you've just seen above, but in your GraphQL schema the custom queries and mutations have the addition of a @canister directive with the canister id of the canister that implements your resolver function.

Rust

In a Rust canister, start by augmenting your schema, for example in canisters/graphql/src/schema.graphql:

type Query {
    custom_get(id: ID!): Message @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Mutation {
    custom_set(id: ID!, text: String): Boolean! @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Message {
    id: ID!
    text: String!
}

Notice we've added @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai") to the custom query and mutation.

Now we need to implement the Rust canister. Let's imagine we've created another Rust canister in canisters/another-rust-canister. We might have a file called canisters/another-rust-canister/src/lib.rs, and it would look like this:


use sudograph;
use std::collections::HashMap;

// TODO This hasn't been tested, might need some derive macros
struct ID(String);

impl ID {
    fn to_string(&self) -> String {
        return String::from(&self.0);
    }
}

// TODO This hasn't been tested, might need some derive macros
struct Message {
    id: String,
    text: String
}

type PrimaryKey = String;
type MessageStore = HashMap<PrimaryKey, Option<Message>>;

#[sudograph::ic_cdk_macros::query]
async fn custom_get(id: ID) -> Option<Message> {
    let message_store = sudograph::ic_cdk::storage::get::<MessageStore>();

    let message_option = message_store.get(&id.to_string());

    match message_option {
        Some(message) => {
            return message.clone();
        },
        None => {
            return None;
        }
    };
}

#[sudograph::ic_cdk_macros::update]
async fn custom_set(id: ID, text: Option<String>) -> bool {
    let message_store = sudograph::ic_cdk::storage::get_mut::<MessageStore>();

    let message = match text {
        Some(text_value) => Some(Message {
            id: id.to_string(),
            text: text_value
        }),
        None => None
    };

    message_store.insert(id.to_string(), message);

    return true;
}

Notice that these functions do not return a Result; they directly return the Rust types that correspond to the GraphQL types. This may change in the future, as returning a Result may end up being more appropriate.

Also notice that we had to implement the ID and Message types ourselves. We do not have all of the generated types available because we are not using the graphql_database macro in this canister. In the future Sudograph may provide a simple way to generate these types for you without generating the entire database, but for now you'll have to implement them yourself or figure out an appropriate way to induce proper serialization and deserialization. For example, Candid might serialize and deserialize ID to and from strings for us...you'll just have to figure this out on your own for now.

Motoko

You can see a full example of Motoko custom resolvers here.

In a Motoko canister, start by augmenting your schema, for example in canisters/graphql/src/schema.graphql:

type Query {
    customGet(id: ID!): Message @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Mutation {
    customSet(id: ID!, text: String): Boolean! @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Message {
    id: ID!
    text: String!
}

Notice we've added @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai") to the custom query and mutation.

Now we need to implement the Motoko canister. Let's imagine we've created a Motoko canister in canisters/motoko. We might have a file called canisters/motoko/main.mo, and it would look like this:

import Text "mo:base/Text";
import Map "mo:base/HashMap";
import Option "mo:base/Option";

actor Motoko {
    let message_store = Map.HashMap<Text, ?Message>(10, Text.equal, Text.hash);

    type Message = {
        id: Text;
        text: Text;
    };

    public query func customGet(id: Text): async ?Message {
        return Option.flatten(message_store.get(id));
    };

    public func customSet(id: Text, text: ?Text): async Bool {
        let message: ?Message = switch (text) {
            case null null;
            case (?text_value) Option.make({
                id;
                text = text_value;
            });
        };
        
        message_store.put(id, message);

        return true;
    };
}

Implementing the Motoko resolvers is very similar to implementing the Rust resolvers; the biggest difference, besides the language itself, is the type conversions. We've implemented the Message type, and we've excluded the ID type, using the native Motoko Text type instead. Again, you might have to experiment with the serialization and deserialization of values between canisters; a lot of it has to do with Candid.

Other languages

Other languages are already somewhat usable (C, C++, AssemblyScript), and many more will come in the future as WebAssembly matures. Writing resolvers in these languages is similar to writing them in Rust or Motoko. Once your schema is set up and correctly pointing to a canister, you simply implement the resolver in your language of choice and ensure that the types align correctly.

Type conversions

GraphQL -> Rust

Object, ID, and Date types must be created in Rust canisters if the graphql_database macro is not invoked. ID and Date types might work as String in Rust.

  • Blob -> Vec<u8>
  • Boolean -> bool
  • Date -> Date
  • Float -> f32
  • ID -> ID
  • Int -> i32
  • JSON -> serde_json::Value
  • String -> String

Creating a custom ID type:


// TODO This hasn't been tested, might need some derive macros
struct ID(String);

impl ID {
    fn to_string(&self) -> String {
        return String::from(&self.0);
    }
}

Creating a custom Date type:


// TODO This hasn't been tested, might need some derive macros
struct Date(String);

impl Date {
    fn to_string(&self) -> String {
        return String::from(&self.0);
    }
}

GraphQL -> Motoko

Object types must be manually created in Motoko.

  • Blob -> Blob
  • Boolean -> Bool
  • Date -> Text
  • Float -> Float
  • ID -> Text
  • Int -> Int32
  • JSON -> Text (it's unclear if this will work)
  • String -> Text

Custom directives

Custom directives (directives that you define) are not yet supported. You'll have to work with the Sudograph directives.

Sudograph directives

Sudograph provides a number of directives for use within your GraphQL schema. Directives can be applied to object types or fields within your schema. The following are available for use:

@relation

  • name: relation
  • arguments: name
  • application: field
  • description: Indicates a two-sided relationship, where both sides of the relationship need to be updated during relation mutations. The name argument is an arbitrary string, but must be the same on both fields representing each side of the relationship.
type Foot {
    id: ID!
    shoe: Shoe @relation(name: "Foot:shoe::Shoe:foot")
}

type Shoe {
    id: ID!
    foot: Foot @relation(name: "Foot:shoe::Shoe:foot")
}

@canister

  • name: canister
  • arguments: id
  • application: field
  • description: Indicates the canister that implements the resolver function. The id argument is used to perform a cross-canister function call under the hood.
type Query {
    customGet(id: ID!): Message @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Mutation {
    customSet(id: ID!, text: String): Boolean! @canister(id: "ryjl3-tyaaa-aaaaa-aaaba-cai")
}

type Message {
    id: ID!
    text: String!
}

Possible future directives

Just let your imagination run wild with what some of these could do:

  • @ignore
  • @auth
  • @token
  • @subnet

Sudograph settings

Sudograph will eventually allow the developer to customize many settings. Sudograph settings are set in your GraphQL schema using the SudographSettings object type. The following are supported now:

type SudographSettings {
    exportGeneratedQueryFunction: true
    exportGeneratedMutationFunction: true
    exportGeneratedInitFunction: true
    exportGeneratedPostUpgradeFunction: true
}

exportGeneratedQueryFunction

Defaults to true. If set to false, the graphql_query function generated by Sudograph will not be exported as a publicly available canister function. This would allow you to implement your own logic before executing a query, for example as part of an authorization flow.

Here's an example of overriding the generated graphql_query function with some basic authorization. You would create the following GraphQL schema in canisters/graphql/src/schema.graphql:

type SudographSettings {
    exportGeneratedQueryFunction: false
}

type User {
    id: ID!
}

You would write the following in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::query]
async fn graphql_query_custom(query_string: String, variables_json_string: String) -> String {
    let authorized_principal = sudograph::ic_cdk::export::Principal::from_text("y6lgw-chi3g-2ok7i-75s5h-k34kj-ybcke-oq4nb-u4i7z-vclk4-hcpxa-hqe").expect("should be able to decode");

    if sudograph::ic_cdk::caller() != authorized_principal {
        panic!("Not authorized");
    }

    return graphql_query(query_string, variables_json_string).await;
}

You would update canisters/graphql/src/graphql.did:

service : {
    "graphql_query_custom": (text, text) -> (text) query;
    "graphql_mutation": (text, text) -> (text);
}

exportGeneratedMutationFunction

Defaults to true. If set to false, the graphql_mutation function generated by Sudograph will not be exported as a publicly available canister function. This would allow you to implement your own logic before executing a mutation, for example as part of an authorization flow.

Here's an example of overriding the generated graphql_mutation function with some basic authorization. You would create the following GraphQL schema in canisters/graphql/src/schema.graphql:

type SudographSettings {
    exportGeneratedMutationFunction: false
}

type User {
    id: ID!
}

You would write the following in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::update]
async fn graphql_mutation_custom(mutation_string: String, variables_json_string: String) -> String {
    let authorized_principal = sudograph::ic_cdk::export::Principal::from_text("y6lgw-chi3g-2ok7i-75s5h-k34kj-ybcke-oq4nb-u4i7z-vclk4-hcpxa-hqe").expect("should be able to decode");

    if sudograph::ic_cdk::caller() != authorized_principal {
        panic!("Not authorized");
    }

    return graphql_mutation(mutation_string, variables_json_string).await;
}

You would update canisters/graphql/src/graphql.did:

service : {
    "graphql_query": (text, text) -> (text) query;
    "graphql_mutation_custom": (text, text) -> (text);
}

exportGeneratedInitFunction

Defaults to true. If set to false, the init function generated by Sudograph will not be exported as a publicly available canister function. This would allow you to implement your own logic during canister initialization. You'll want to make sure to call the generated init function after your functionality is complete, as it executes all of the init mutations that initialize the database.

Here's an example of overriding the generated init function. You would create the following GraphQL schema in canisters/graphql/src/schema.graphql:

type SudographSettings {
    exportGeneratedInitFunction: false
}

type User {
    id: ID!
}

You would write the following in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::init]
async fn init_custom() {
    init().await;
}

exportGeneratedPostUpgradeFunction

Defaults to true. If set to false, the post_upgrade function generated by Sudograph will not be exported as a publicly available canister function. This would allow you to implement your own logic during canister post upgrade. You'll want to make sure to call the generated post_upgrade function after your functionality is complete, as it executes all of the init mutations that initialize the database (unless you are keeping your state in stable memory, in which case you would not want to initialize the database again).

Here's an example of overriding the generated post_upgrade function. You would create the following GraphQL schema in canisters/graphql/src/schema.graphql:

type SudographSettings {
    exportGeneratedPostUpgradeFunction: false
}

type User {
    id: ID!
}

You would write the following in canisters/graphql/src/graphql.rs:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::post_upgrade]
async fn post_upgrade_custom() {
    post_upgrade().await;
}

Generated Schema

Sudograph takes your schema and generates a much more powerful schema along with the resolvers for that schema.

In addition to this documentation, assuming you've generated an example project with npx sudograph and deployed your canisters, you can navigate to the playground at http://r7inp-6aaaa-aaaaa-aaabq-cai.localhost:8000 in a Chromium browser and click the Docs button in the top right corner. That documentation explains everything that you can do with your newly generated schema.

As an example, given the following simple schema:

type User {
    id: ID!
}

type BlogPost {
    id: ID!
}

Sudograph will generate the following schema along with its resolvers:

type Query {
  readUser(
    search: ReadUserInput,
    limit: Int,
    offset: Int,
    order: OrderUserInput
  ): [User!]!
	
  readBlogPost(
    search: ReadBlogPostInput,
    limit: Int,
    offset: Int,
    order: OrderBlogPostInput
  ): [BlogPost!]!
}

input DeleteUserInput {
	id: ID
	ids: [ID!]
}

input UpdateBlogPostInput {
	id: ID!
}

input DeleteBlogPostInput {
	id: ID
	ids: [ID!]
}

input ReadUserInput {
	id: ReadIDInput
	and: [ReadUserInput!]
	or: [ReadUserInput!]
}

input ReadIDInput {
	eq: ID
	gt: ID
	gte: ID
	lt: ID
	lte: ID
	contains: ID
}

input OrderUserInput {
	id: OrderDirection
}

enum OrderDirection {
	ASC
	DESC
}

type User {
	id: ID!
}

input ReadBlogPostInput {
	id: ReadIDInput
	and: [ReadBlogPostInput!]
	or: [ReadBlogPostInput!]
}

input OrderBlogPostInput {
	id: OrderDirection
}

type BlogPost {
	id: ID!
}

type Mutation {
	createUser(input: CreateUserInput): [User!]!
	createBlogPost(input: CreateBlogPostInput): [BlogPost!]!
	updateUser(input: UpdateUserInput!): [User!]!
	updateBlogPost(input: UpdateBlogPostInput!): [BlogPost!]!
	deleteUser(input: DeleteUserInput!): [User!]!
	deleteBlogPost(input: DeleteBlogPostInput!): [BlogPost!]!
	initUser: Boolean!
	initBlogPost: Boolean!
}

input UpdateUserInput {
	id: ID!
}

input CreateBlogPostInput {
	id: ID
}

input CreateUserInput {
	id: ID
}

Query

Sudograph will generate the equivalent of the Query object type based on your GraphQL schema. If you have specified your own Query object type, the two object types will be combined into the final Query object type.

The fields in the Query object type generated by Sudograph are:

read

The read query is the main way to read data from your GraphQL database.

Per object type defined in your GraphQL schema, Sudograph generates one read field on the Query object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Query {
    readUser(
        search: ReadUserInput,
        limit: Int
        offset: Int
        order: OrderUserInput
    ): [User!]!
}

input ReadUserInput {
	id: ReadIDInput
	and: [ReadUserInput!]
	or: [ReadUserInput!]
}

input OrderUserInput {
	id: OrderDirection
}

enum OrderDirection {
	ASC
	DESC
}

Each read query has the ability to search, limit, offset, and order. Each read query returns an array of its corresponding object types.

It's important to remember that within read selection sets you also have the ability to search, limit, offset, and order on any many-relation.

For example if you had this schema:

type User {
    id: ID!
    blogPosts: [BlogPost!]!
}

type BlogPost {
    id: ID!
    title: String!
}

You could write a query like this:

query {
    readUser {
        id
        blogPosts(
            search: {
                title: {
                    contains: "The"
                }
            }
            offset: 0
            limit: 10
            order: {
                title: ASC
            }
        ) {
            id
            title
        }
    }
}

Mutation

Sudograph will generate the equivalent of the Mutation object type based on your GraphQL schema. If you have specified your own Mutation object type, the two object types will be combined into the final Mutation object type.

The fields in the Mutation object type generated by Sudograph are:

create

The create mutation is the main way to create data in your GraphQL database.

Per object type defined in your GraphQL schema, Sudograph generates one create field on the Mutation object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Mutation {
	createUser(input: CreateUserInput): [User!]!
}

input CreateUserInput {
	id: ID
}

It's important to remember that within create selection sets you also have the ability to search, limit, offset, and order on any many-relation.

For example if you had this schema:

type User {
    id: ID!
    blogPosts: [BlogPost!]!
}

type BlogPost {
    id: ID!
    title: String!
}

You could write a mutation like this:

mutation {
    createUser(input: {
        blogPosts: {
            connect: ["7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"]
        }
    }) {
        id
        blogPosts(
            search: {
                title: {
                    contains: "The"
                }
            }
            offset: 0
            limit: 10
            order: {
                title: ASC
            }
        ) {
            id
            title
        }
    }
}

update

The update mutation is the main way to update data in your GraphQL database.

Per object type defined in your GraphQL schema, Sudograph generates one update field on the Mutation object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Mutation {
	updateUser(input: UpdateUserInput!): [User!]!
}

input UpdateUserInput {
	id: ID!
}

It's important to remember that within update selection sets you also have the ability to search, limit, offset, and order on any many-relation.

For example if you had this schema:

type User {
    id: ID!
    blogPosts: [BlogPost!]!
}

type BlogPost {
    id: ID!
    title: String!
}

You could write a mutation like this:

mutation {
    updateUser(input: {
        id: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
        blogPosts: {
            connect: ["2c3nrr-4jhf3-2gozt-hj37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"]
        }
    }) {
        id
        blogPosts(
            search: {
                title: {
                    contains: "The"
                }
            }
            offset: 0
            limit: 10
            order: {
                title: ASC
            }
        ) {
            id
            title
        }
    }
}

delete

The delete mutation is the main way to delete data in your GraphQL database.

Per object type defined in your GraphQL schema, Sudograph generates one delete field on the Mutation object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Mutation {
	deleteUser(input: DeleteUserInput!): [User!]!
}

input DeleteUserInput {
	id: ID
	ids: [ID!]
}

It's important to remember that within delete selection sets you also have the ability to search, limit, offset, and order on any many-relation.

For example if you had this schema:

type User {
    id: ID!
    blogPosts: [BlogPost!]!
}

type BlogPost {
    id: ID!
    title: String!
}

You could write a mutation like this:

mutation {
    deleteUser(input: {
        id: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
    }) {
        id
        blogPosts(
            search: {
                title: {
                    contains: "The"
                }
            }
            offset: 0
            limit: 10
            order: {
                title: ASC
            }
        ) {
            id
            title
        }
    }
}

init

The init mutation initializes the underlying Rust data structures in your GraphQL database. This mutation must be run before other queries or mutations can be executed for an object type. Sudograph will automatically run all init mutations for all of your object types in the graphql canister's init and post_upgrade functions, unless you override them.

Per object type defined in your GraphQL schema, Sudograph generates one init field on the Mutation object type. We'll focus in on what happens with one object type defined. Imagine your schema looks like this:

type User {
    id: ID!
}

Sudograph will generate the following (we're focusing on just one part of the generated schema):

type Mutation {
	initUser: Boolean!
}
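
If you override the generated init or post_upgrade functions, you could run the generated init mutation yourself with a mutation like the following (a sketch based on the schema above):

```graphql
mutation {
    initUser
}
```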

Subscription

Subscriptions are not currently supported by Sudograph.

Because the Internet Computer itself does not expose any push mechanisms, it will be difficult to provide subscription capabilities in the usual ways, e.g. web sockets.

For now you will have to implement your own polling solutions to know when data has been updated.

Search

The search input allows for flexible querying of records. You can query by scalars and relations to arbitrary depths (assuming performance allows). You can also use arbitrary combinations of and and or in your searches.
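
For example, assuming a User type with age and username fields, a search combining and and or might look like this (a sketch, not tested):

```graphql
query {
    readUser(search: {
        and: [
            { age: { gte: 18 } }
            {
                or: [
                    { username: { contains: "a" } }
                    { username: { contains: "b" } }
                ]
            }
        ]
    }) {
        id
        username
        age
    }
}
```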

You can search by scalar fields using the inputs generated for each scalar type.

Blob

Generated input:

input ReadBlobInput {
	eq: Blob
	contains: Blob
	startsWith: Blob
	endsWith: Blob
}

Examples:

query {
    readFile(search: {
        contents: {
            eq: [101, 108, 108]
        }
    }) {
        id
        contents
    }
}

query {
    readFile(search: {
        contents: {
            contains: [108, 108]
        }
    }) {
        id
        contents
    }
}

query {
    readFile(search: {
        contents: {
            startsWith: [101]
        }
    }) {
        id
        contents
    }
}

query {
    readFile(search: {
        contents: {
            endsWith: [108]
        }
    }) {
        id
        contents
    }
}

Boolean

Generated input:

input ReadBooleanInput {
	eq: Boolean
}

Examples:

query {
    readUser(search: {
        living: {
            eq: true
        }
    }) {
        id
        living
    }
}

Date

Generated input:

input ReadDateInput {
	eq: Date
	gt: Date
	gte: Date
	lt: Date
	lte: Date
}

Examples:

query {
    readBlogPost(search: {
        createdAt: {
            eq: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

query {
    readBlogPost(search: {
        createdAt: {
            gt: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

query {
    readBlogPost(search: {
        createdAt: {
            gte: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

query {
    readBlogPost(search: {
        createdAt: {
            lt: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

query {
    readBlogPost(search: {
        createdAt: {
            lte: "2021-07-02T22:45:44.001Z"
        }
    }) {
        id
        title
    }
}

Float

Generated input:

input ReadFloatInput {
	eq: Float
	gt: Float
	gte: Float
	lt: Float
	lte: Float
}

Examples:

query {
    readUser(search: {
        height: {
            eq: 5.8
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        height: {
            gt: 5.8
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        height: {
            gte: 5.8
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        height: {
            lt: 5.8
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        height: {
            lte: 5.8
        }
    }) {
        id
    }
}

ID

Generated input:

input ReadIDInput {
	eq: ID
	gt: ID
	gte: ID
	lt: ID
	lte: ID
	contains: ID
}

Examples:

query {
    readUser(search: {
        id: {
            eq: "7c3nrr-6jhf3-2gozt-hh37a-d6nvf-lsdwv-d7bhp-uk5nt-r42y"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            gt: "1"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            gte: "1"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            lt: "100"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            lte: "100"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        id: {
            contains: "7c3nrr"
        }
    }) {
        id
    }
}

Int

Generated input:

input ReadIntInput {
	eq: Int
	gt: Int
	gte: Int
	lt: Int
	lte: Int
}

Examples:

query {
    readUser(search: {
        age: {
            eq: 25
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        age: {
            gt: 20
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        age: {
            gte: 30
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        age: {
            lt: 45
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        age: {
            lte: 70
        }
    }) {
        id
    }
}

JSON

Generated input:

input ReadJSONInput {
	eq: String
	gt: String
	gte: String
	lt: String
	lte: String
	contains: String
}

Examples:

query {
    readUser(search: {
        meta: {
            eq: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            gt: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            gte: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            lt: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            lte: "{ \"zone\": 5 }"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        meta: {
            contains: "zone"
        }
    }) {
        id
    }
}

String

Generated input:

input ReadStringInput {
	eq: String
	gt: String
	gte: String
	lt: String
	lte: String
	contains: String
}

Examples:

query {
    readUser(search: {
        username: {
            eq: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            gt: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            gte: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            lt: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            lte: "lastmjs"
        }
    }) {
        id
    }
}

query {
    readUser(search: {
        username: {
            contains: "mjs"
        }
    }) {
        id
    }
}

and

The search input for each object type, in addition to all scalar and relation fields, contains an and field. If you want to and together multiple searches of the same field, there are two ways to do so:

query {
    readUser(search: {
        age: {
            gte: 5
            lte: 10
        }
    }) {
        id
        age
    }
}

This can also be achieved like so:

query {
    readUser(search: {
        and: [
            {
                age: {
                    gte: 5
                }
            },
            {
                age: {
                    lte: 10
                }
            }
        ]
    }) {
        id
        age
    }
}

or

The search input for each object type, in addition to all scalar and relation fields, contains an or field. If you want to or together multiple searches of the same field, you can do so:

query {
    readUser(search: {
        or: [
            {
                age: {
                    eq: 5
                }
            },
            {
                age: {
                    eq: 6
                }
            }
        ]
    }) {
        id
        age
    }
}

You can search by relation fields using the search inputs generated for each object type.

Imagine the following schema:

type User {
    id: ID!
    username: String!
    blogPosts: [BlogPost!]! @relation(name: "User:blogPosts::BlogPost:author")
}

type BlogPost {
    id: ID!
    publishedAt: Date
    title: String!
    author: User! @relation(name: "User:blogPosts::BlogPost:author")
}

The search inputs generated for each object type would be:

input ReadUserInput {
	id: ReadIDInput
	username: ReadStringInput
	blogPosts: ReadBlogPostInput
	and: [ReadUserInput!]
	or: [ReadUserInput!]
}

input ReadBlogPostInput {
	id: ReadIDInput
	publishedAt: ReadDateInput
	title: ReadStringInput
	author: ReadUserInput
	and: [ReadBlogPostInput!]
	or: [ReadBlogPostInput!]
}

You can search across relations like so:

query {
    readUser(search: {
        blogPosts: {
            title: {
                contains: "The"
            }
        }
    }) {
        id
        username
        blogPosts {
            id
            title
        }
    }
}
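You can also combine scalar and relation searches with and and or. For example, under the schema above, this query (a sketch) would find users whose username contains "mjs" and who have authored a blog post whose title contains "The":

query {
    readUser(search: {
        and: [
            {
                username: {
                    contains: "mjs"
                }
            },
            {
                blogPosts: {
                    title: {
                        contains: "The"
                    }
                }
            }
        ]
    }) {
        id
        username
    }
}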

Limit

The limit input argument is an Int that allows you to specify how many records to return for a selection. For example, a limit of 0 would always return 0 records, and a limit of 10 would return no more than 10 records.

If the limit specified is greater than the number of records available based on the query inputs, then all of the available records will be returned.

Combining limit with offset allows for flexible paging capabilities. A good example of paging can be found in the frontend of the files example.

Assuming there are 10 User records in the database:

query {
    readUser(limit: 10) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 0 }, { id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }, { id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }]

query {
    readUser(limit: 5) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 0 }, { id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }]

query {
    readUser(limit: 0) {
        id
    }
}

# The readUser property in the selection set would be:
# []
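Combining limit with offset is the basis for paging. For example, assuming the same 10 User records and ordering, the second page of size 5 could be fetched like this (a sketch):

query {
    readUser(limit: 5, offset: 5) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 5 }, { id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }]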

It's important to remember that within any selection set you can apply a limit to any many-relation:

query {
    readUser {
        id
        blogPosts(limit: 5) {
            title
        }
    }
}

mutation {
    createUser(input: {
        username: "lastmjs"
    }) {
        id
        blogPosts(limit: 5) {
            title
        }
    }
}

mutation {
    updateUser(input: {
        id: "0"
        username: "lastmjs"
    }) {
        id
        blogPosts(limit: 5) {
            title
        }
    }
}

mutation {
    deleteUser(input: {
        id: "0"
    }) {
        id
        blogPosts(limit: 5) {
            title
        }
    }
}

Offset

The offset input argument is an Int that allows you to specify the starting index into the selection of records. For example, imagine there are 10 User records in the database. An offset of 0 would return all 10 records, starting at index 0, the first record (assuming the records already have a stable ordering in the database):

query {
    readUser(offset: 0) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 0 }, { id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }, { id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }]

An offset of 1 would return 9 records, starting at index 1, the second record:

query {
    readUser(offset: 1) {
        id
    }
}

# The readUser property in the selection set would be:
# [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }, { id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }]

If the offset specified is greater than or equal to the number of records available based on the query inputs, Sudograph will panic, causing the call to trap. Essentially, at that point the offset has gone beyond the end of the selection array. If you disagree with this choice, let me know @lastmjs or open an issue in the repository.
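For example, with only 10 User records in the database, this call would trap because the offset points past the end of the selection:

query {
    readUser(offset: 10) {
        id
    }
}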

Combining offset with limit allows for flexible paging capabilities. A good example of paging can be found in the frontend of the files example.

It's important to remember that within any selection set you can apply an offset to any many-relation:

query {
    readUser {
        id
        blogPosts(offset: 5) {
            title
        }
    }
}

mutation {
    createUser(input: {
        username: "lastmjs"
    }) {
        id
        blogPosts(offset: 5) {
            title
        }
    }
}

mutation {
    updateUser(input: {
        id: "0"
        username: "lastmjs"
    }) {
        id
        blogPosts(offset: 5) {
            title
        }
    }
}

mutation {
    deleteUser(input: {
        id: "0"
    }) {
        id
        blogPosts(offset: 5) {
            title
        }
    }
}

Order

The order input allows you to order by any one scalar field of an object type. In the future it may be possible to order by multiple fields. There are two possible orderings, DESC and ASC.

Here are some examples assuming the following schema:

type User {
    id: ID!
    age: Int!
    username: String!
}

query {
    readUser(order: {
        id: DESC
    }) {
        id
    }
}

query {
    readUser(order: {
        age: ASC
    }) {
        id
    }
}

query {
    readUser(order: {
        username: DESC
    }) {
        id
    }
}
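Because offset and limit assume a stable ordering, combining order with them gives deterministic pages. This sketch (assuming all three arguments can be passed to the same read query) would fetch the third page of 5 users ordered by username:

query {
    readUser(order: {
        username: ASC
    }, limit: 5, offset: 10) {
        id
        username
    }
}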

It's important to remember that within any selection set you can apply an order to any many-relation:

query {
    readUser {
        id
        blogPosts(order: {
            title: DESC
        }) {
            title
        }
    }
}

mutation {
    createUser(input: {
        username: "lastmjs"
    }) {
        id
        blogPosts(order: {
            title: DESC
        }) {
            title
        }
    }
}

mutation {
    updateUser(input: {
        id: "0"
        username: "lastmjs"
    }) {
        id
        blogPosts(order: {
            title: DESC
        }) {
            title
        }
    }
}

mutation {
    deleteUser(input: {
        id: "0"
    }) {
        id
        blogPosts(order: {
            title: DESC
        }) {
            title
        }
    }
}

Authorization

Authorization and authentication are two separate but related concerns. Authentication proves who (which identity) is performing a query or update, and authorization describes what that identity is allowed to do.

Sudograph relies on the Internet Computer's native authentication of clients using public-key cryptography. There are some very nice helper libraries that allow you to easily create identities on the frontend that are able to sign query and update calls to canisters. See the agent-js documentation for more details.

Authorization on the other hand must be handled by your canister in your own custom functions or resolvers. Before allowing a mutation to be executed, or before returning data in a custom resolver, you will want to get the principal of the caller and check that it is allowed to perform the operation.

Here's a very simple example from the Ethereum Archival Canister. First the schema instructs Sudograph not to export the generated mutation function:

type SudographSettings {
    exportGeneratedMutationFunction: false
}

This is important because we do not want any mutations taking place that aren't authorized. The Ethereum Archival Canister is designed to accept mutations only from one identity (the EC2 instance that mirrors blocks from a geth node). We perform the authorization like so:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[update]
async fn graphql_mutation_custom(mutation_string: String, variables_json_string: String) -> String {
    let ec2_principal = ic_cdk::export::Principal::from_text("y6lgw-chi3g-2ok7i-75s5h-k34kj-ybcke-oq4nb-u4i7z-vclk4-hcpxa-hqe").expect("should be able to decode");

    if ic_cdk::caller() != ec2_principal {
        panic!("Not authorized");
    }

    return graphql_mutation(mutation_string, variables_json_string).await;
}

We have overridden the generated GraphQL mutation function, graphql_mutation, with our own custom graphql_mutation_custom. We then hard-code the principal representing the EC2 instance's identity, and panic if any other identity attempts to perform an update.

This is a very simple example, but it illustrates how you can create custom functions designed for a specific purpose with authorization, using Sudograph to perform CRUD operations.

The plan is to eventually introduce authorization configuration into the GraphQL schema, allowing you to use a directive like @auth to enforce authorization.

Until you can configure authorization from within the schema itself, it will probably be necessary to control all access to queries and mutations from custom canister functions that enforce their own authorization. Custom resolvers won't really be useful if any data in the schema needs authorized access.

Canister authorization

If you are interested in using a Rust or Motoko canister as a client to your graphql canister, then take a look at the rust-client and motoko-client examples.

The graphql canister can be configured to only authorize queries or updates from a specific canister. This will allow you to create authorized data-specific functions in your Rust or Motoko canisters, and those functions can then use GraphQL to call into the graphql canister. This is probably the best way to implement authorization in your applications until something like the @auth directive is implemented.

Rust authorization


use ic_cdk;
use ic_cdk_macros;

#[ic_cdk_macros::import(canister = "graphql")]
struct GraphQLCanister;

#[ic_cdk_macros::query]
async fn get_all_users() -> String {
    // TODO here you can implement your custom authorization for get_all_users

    let result = GraphQLCanister::graphql_query_custom(
        "
            query {
                readUser {
                    id
                }
            }
        ".to_string(),
        "{}".to_string()
    ).await;

    let result_string = result.0;

    return result_string;
}

Motoko authorization

import Text "mo:base/Text";

actor Motoko {
    let GraphQLCanister = actor "rrkah-fqaaa-aaaaa-aaaaq-cai": actor {
        graphql_query_custom: query (Text, Text) -> async (Text);
        graphql_mutation: (Text, Text) -> async (Text);
    };

    public func get_all_users(): async (Text) {
        // TODO here you can implement your custom authorization for get_all_users
        
        let result = await GraphQLCanister.graphql_query_custom("query { readUser { id } }", "{}");

        return result;
    }
}

You can then authorize specific canisters in the graphql canister like this:


use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::query]
async fn graphql_query_custom(query: String, variables: String) -> String {
    let motoko_canister_principal = sudograph::ic_cdk::export::Principal::from_text("ryjl3-tyaaa-aaaaa-aaaba-cai").expect("should be able to decode");

    if sudograph::ic_cdk::caller() != motoko_canister_principal {
        panic!("Not authorized");
    }

    return graphql_query(query, variables).await;
}

graphql_query_custom will only accept calls from the ryjl3-tyaaa-aaaaa-aaaba-cai canister. Now all authorization logic can be implemented in the ryjl3-tyaaa-aaaaa-aaaba-cai canister.

Again, the goal is to allow you to write custom authorization into your schema with something like an @auth directive, which should greatly simplify authorization and allow for GraphQL operations to be made directly from a frontend client.

Migrations

Whenever you wish to make changes to a canister without losing that canister's state, you must perform what is called an upgrade.

An upgrade allows you to preserve your canister's state while changing its code. You can see a full example of an upgrade here.

Simple migrations

If you haven't changed your schema and you just want to preserve state across upgrades:


use sudograph;

sudograph::graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::pre_upgrade]
fn pre_upgrade_custom() {
    let object_type_store = sudograph::ic_cdk::storage::get::<ObjectTypeStore>();

    sudograph::ic_cdk::storage::stable_save((object_type_store,)).expect("should be able to save the ObjectTypeStore to stable memory");
}

#[sudograph::ic_cdk_macros::post_upgrade]
fn post_upgrade_custom() {
    let (stable_object_type_store,): (ObjectTypeStore,) = sudograph::ic_cdk::storage::stable_restore().expect("ObjectTypeStore should be in stable memory");

    let object_type_store = sudograph::ic_cdk::storage::get_mut::<ObjectTypeStore>();

    for (key, value) in stable_object_type_store.into_iter() {
        object_type_store.insert(key, value);
    }
}

The upgrade shown above assumes no changes to your GraphQL schema. If you were to change your GraphQL schema and then perform the upgrade, you would run into a number of issues. This is because the underlying data structures that make up your database would be out of sync with your schema. In this case your code would cease to function as intended.

You must perform automatic or manual migrations on your code if you change your schema.

Automatic migrations

Automatic migrations are not currently supported. For now you'll need to manually change the ObjectTypeStore in your post_upgrade function to reflect the changes in your schema, or accept that you will lose all of your state on every deploy (this may be acceptable if you plan on only deploying once).

The plan is to eventually automate migrations as much as possible. With automatic migrations, if you change your schema and wish to update it on a live canister, Sudograph will generate migrations written in Rust to accomplish the migration for you. If a migration cannot be performed automatically, Sudograph will allow you to easily define your own migration code in Rust. That's the rough plan for now.

Manual migrations

Even with automatic migrations, you will run into scenarios that cannot be handled automatically. You may be required to manually update the ObjectTypeStore in the post_upgrade function to fully migrate data after schema changes. Studying the documentation available for the ObjectTypeStore will help you determine what needs to be changed within it when you change your schema.

Let's look at the migrations required when we add a field to an object type in our schema. Here's the original schema:

type User {
    id: ID!
}

Imagine that we have deployed the original schema. Now we will change the schema:

type User {
    id: ID!
    username: String
}

We need to change the ObjectTypeStore so that it is aware of the change. In our post_upgrade function:


#[sudograph::ic_cdk_macros::post_upgrade]
fn post_upgrade_custom() {
    let (stable_object_type_store,): (ObjectTypeStore,) = sudograph::ic_cdk::storage::stable_restore().expect("ObjectTypeStore should be in stable memory");

    let object_type_store = sudograph::ic_cdk::storage::get_mut::<ObjectTypeStore>();

    for (key, value) in stable_object_type_store.into_iter() {
        object_type_store.insert(key, value);
    }

    // First grab the object type for User
    let user_object_type = object_type_store.get_mut("User").expect("User object type should exist");

    // Then add the type information for the username field
    user_object_type.field_types_store.insert(
        "username".to_string(),
        sudograph::sudodb::FieldType::String
    );

    // Finally add the initial values for the username field
    for field_value_store in user_object_type.field_values_store.values_mut() {
        field_value_store.insert(
            "username".to_string(),
            sudograph::sudodb::FieldValue::Scalar(None)
        );
    }
}

After the next deploy we will have successfully migrated our database! Make sure to remove the migration code before subsequent deploys, since it assumes the pre-migration state of the ObjectTypeStore. Automatic migrations will make this process simpler and more standardized.

Transactions

Sudograph does not have a strong guarantee of atomicity (transactions) at this time. Read on for more information.

Single canister mutations

Within a single update call, transactions are automatically handled by the Internet Computer itself! If there are any errors (technically Wasm traps), all state changes are undone and thus not persisted. This is a very nice feature of single canister development, and it's important to know that the schema that Sudograph generates for you is limited to a single canister by default.

Unfortunately, Sudograph does not currently guarantee that all errors will lead to traps, and thus there is no guarantee that all state changes within a single update call will be undone. Once an automated testing framework is in place, adding this functionality to Sudograph should not be too difficult.

Once Sudograph ensures all errors will lead to traps, you will be able to execute transactions and ensure atomicity by executing many mutations within a single update call like this:

mutation {
    createUser1: createUser(input: {
        username: "user1"
    }) {
        id
    }

    createUser2: createUser(input: {
        username: "user2"
    }) {
        id
    }

    createUser3: createUser(input: {
        username: "user3"
    }) {
        id
    }
}

The mutations above will either all succeed or all fail.

Multi-canister mutations

Even if you batch many mutations into one update call, if any of your mutations are custom and call into other canisters, the atomic guarantees are gone. Providing atomic operations in these situations will be more difficult for Sudograph to implement because the Internet Computer does not provide atomicity when doing multi-canister updates.

If you need transactions across multiple canisters, you will need to write custom code that can undo state changes across all canisters in a chain of mutations.

Multi-canister scaling

Sudograph will not scale a single schema across multiple canisters automatically. The goal is to eventually provide this functionality, but the timeline and feasibility of this goal are unknown.

You can deploy as many Sudograph canisters with a single schema as you'd like, but the generated queries and mutations will only be able to operate on data that has been created within the same canister (unless you write your own glue code to enable cross-canister queries and mutations).

Currently each schema that you deploy into a canister is limited to ~4 GB of data. This should be sufficient for prototyping and for applications with modest storage and usage needs. There are also multiple scaling techniques that could be used to scale out, for example by storing large files (video, audio, images, documents) in a separate set of canisters that has automatic scaling built-in, and storing references to that data in your Sudograph canister.

One of the main problems Sudograph will have scaling across multiple canisters is ensuring efficient and flexible querying. Complex indexing and searching will need to work on relational data across multiple canisters.

Sudograph is focused first on providing an amazing single canister development experience. This should be sufficient for many new developers and young projects. There are multiple promising technologies or solutions that could lift the ~4 GB limit, including memory64, multiple memories, and possibly infinite/unbounded virtual memory in canisters.

I am hopeful that individual canisters will be able to scale into the 10s or 100s or perhaps 1000s of GBs in the near future.

Custom database operations

Sudograph is designed to generate much of the CRUD functionality you might need, but it can't handle every situation. You may need direct access to the underlying data structures.

Sudodb

One layer below Sudograph is Sudodb. Sudodb is a very simple relational database that uses the Internet Computer's orthogonal persistence directly. It exposes a few basic functions like create, read, update, and delete. You can use those functions directly in custom resolvers or your own functions, and you can dig through the Sudodb documentation and source code for more details.

Here's an example of how you would use Sudodb directly:


use std::collections::HashMap;

use sudograph::graphql_database;

graphql_database!("canisters/graphql/src/schema.graphql");

#[sudograph::ic_cdk_macros::query]
async fn read_all_users() -> Vec<User> {
    let object_type_store = sudograph::ic_cdk::storage::get::<ObjectTypeStore>();

    let mut selection_set_map = HashMap::new();

    selection_set_map.insert(
        String::from("id"),
        sudograph::sudodb::SelectionSetInfo {
            selection_set: sudograph::sudodb::SelectionSet(None),
            search_inputs: vec![],
            limit_option: None,
            offset_option: None,
            order_inputs: vec![]
        }
    );

    let selection_set = sudograph::sudodb::SelectionSet(Some(selection_set_map));

    let read_result = sudograph::sudodb::read(
        object_type_store,
        "User",
        &vec![],
        None,
        None,
        &vec![],
        &selection_set
    );

    match read_result {
        Ok(strings) => {
            let deserialized_strings: Vec<User> = strings.iter().map(|string| {
                return sudograph::serde_json::from_str(string).unwrap();
            }).collect();

            return deserialized_strings;
        },
        Err(_) => {
            return vec![];
        }
    };
}

ObjectTypeStore

One layer below Sudodb is the ObjectTypeStore. The ObjectTypeStore is the main data structure that makes up the GraphQL database. You can directly read from or update the ObjectTypeStore in custom resolvers or your own functions, and you can dig into its structure in the documentation and source code.

Here's an example of how you would use the ObjectTypeStore directly:


#[sudograph::ic_cdk_macros::query]
async fn read_all_users() -> Vec<User> {
    let object_type_store = sudograph::ic_cdk::storage::get::<ObjectTypeStore>();

    let object_type = object_type_store.get("User").expect("should exist");

    let users = object_type.field_values_store.iter().map(|(_, field_value_store)| {
        let id = match field_value_store.get("id").expect("should exist") {
            FieldValue::Scalar(field_value_scalar_option) => match field_value_scalar_option.as_ref().expect("should exist") {
                FieldValueScalar::String(id) => ID(id.to_string()),
                _ => panic!("should not happen")
            },
            _ => panic!("should not happen")
        };

        let username = match field_value_store.get("username").expect("should exist") {
            FieldValue::Scalar(field_value_scalar_option) => match field_value_scalar_option.as_ref().expect("should exist") {
                FieldValueScalar::String(username) => username.to_string(),
                _ => panic!("should not happen")
            },
            _ => panic!("should not happen")
        };

        // This example does not show you how to resolve relations
        // You would need to go and get the blog posts by using information in the blogPosts FieldValue
        // and retrieving the records from the BlogPost object type
        let blog_posts = vec![];

        return User {
            id,
            username,
            blogPosts: blog_posts
        };
    }).collect();

    return users;
}

Custom async_graphql integration

Sudograph is built on the bedrock of async_graphql. async_graphql is the library providing most of the fundamental GraphQL functionality, including resolving queries and mutations. Sudograph is mostly tasked with transforming your provided schema into the Rust data structures that async_graphql expects. Though Sudograph is designed to provide a lot of functionality for you automatically, you may find the need to dig deeper and integrate with async_graphql more directly.

The automatically generated graphql_query and graphql_mutation functions create an async_graphql schema data structure. These functions also accept queries and mutations and execute them against that schema. You can always generate your own functions (see Sudograph Settings) and use async_graphql directly if you wish. You can see how Sudograph creates an async_graphql schema here (look for the graphql_query and graphql_mutation functions).

You can write your own async_graphql types as well. Basically, if you understand how Sudograph is simply generating async_graphql Rust data structures, including queries and mutations, you will be able to figure out how to augment the schema yourself. This could be very useful if you are waiting on Sudograph to implement a feature for you, as you might be able to implement it yourself right away with minimal effort.

Limitations

  • No custom scalars, only Blob, Boolean, Date, Float, ID, Int, JSON, and String are available
  • No custom input objects, only custom input scalars allowed in custom resolvers
  • Each schema is limited to a single canister with ~4 GB of storage
  • Very inefficient querying
  • No automatic migrations; once you deploy, the schema is final unless you implement your own migrations
  • No authorization at the schema level; handle it with your own custom authorization at the canister function level
  • No automated tests
  • No subscriptions
  • No transactions