Launching DApp on Concordium Mainnet - Docker Setup

Hello Team,

We are building a dApp on top of the Concordium wallet. We are pushing it to Mainnet, and we want to run our own node (to use as the "--grpc-ip" endpoint) using Docker.

We would like some help understanding how the Docker file works and how to host it on our server.

Do you have any specific questions that we can help with?

As a baseline you just need to run a node and enable GRPC2, which is enabled by default in all distributions.

To help you further, could you please tell us in more detail which steps are unclear to you?

Thank you Team,

Currently we are using the same yaml configuration as in the documentation.

The .yaml file is attached below. When we execute the command "docker compose -f mainnet-node.yaml up", it runs for several seconds.

If we are not wrong, the node is running at http://mainnet-node:20000. When we use that in "concordium-client module deploy ./dist/module.wasm.v1 --sender "key" --name Market-NFT3 --grpc-port 20000 --grpc-ip mainnet-node",

it throws the error "Error: The GRPC request failed: Cannot establish connection to GRPC endpoint."

# This is an example configuration for running the mainnet node
version: '3'
services:
  mainnet-node:
    platform: "linux/amd64"
    container_name: mainnet-node
    image: concordium/mainnet-node:latest
    pull_policy: always
    network_mode: bridge
    environment:
      # Environment specific configuration
      # The url where IPs of the bootstrap nodes can be found.
      - CONCORDIUM_NODE_CONNECTION_BOOTSTRAP_NODES=bootstrap.mainnet.concordium.software:8888
      # Where the genesis is located
      - CONCORDIUM_NODE_CONSENSUS_GENESIS_DATA_FILE=/mainnet-genesis.dat
      # The url of the catchup file. This speeds up the catchup process.
      - CONCORDIUM_NODE_CONSENSUS_DOWNLOAD_BLOCKS_FROM=https://catchup.mainnet.concordium.software/blocks.idx
      # General node configuration. Data and config directories (it's OK if they
      # are the same). This should match the volume mount below. If the location
      # of the mount inside the container is changed, then these should be
      # changed accordingly as well.
      - CONCORDIUM_NODE_DATA_DIR=/mnt/data
      - CONCORDIUM_NODE_CONFIG_DIR=/mnt/data
      # The port on which the node will listen for incoming connections. This is a
      # port inside the container. It is mapped to an external port by the port
      # mapping in the `ports` section below. If the internal and external ports
      # are going to be different then you should also set
      # `CONCORDIUM_NODE_EXTERNAL_PORT` variable to what the external port value is.
      - CONCORDIUM_NODE_LISTEN_PORT=8888
      # Desired number of nodes to be connected to.
      - CONCORDIUM_NODE_CONNECTION_DESIRED_NODES=5
      # Maximum number of __nodes__ the node will be connected to.
      - CONCORDIUM_NODE_CONNECTION_MAX_ALLOWED_NODES=10
      # Address of the V2 GRPC server.
      - CONCORDIUM_NODE_GRPC2_LISTEN_ADDRESS=0.0.0.0
      # And its port which has to be the same as in `CONCORDIUM_NODE_COLLECTOR_GRPC_HOST`
      # that is defined for the collector.
      - CONCORDIUM_NODE_GRPC2_LISTEN_PORT=20000
      # Maximum number of __connections__ the node can have. This can temporarily be more than
      # the number of peers when incoming connections are processed. This limit
      # ensures that there cannot be too many of those.
      - CONCORDIUM_NODE_CONNECTION_HARD_CONNECTION_LIMIT=20
      # Number of threads to use to process network events. This should be
      # adjusted based on the resources the node has (in combination with
      # `CONCORDIUM_NODE_RUNTIME_HASKELL_RTS_FLAGS`) below.
      - CONCORDIUM_NODE_CONNECTION_THREAD_POOL_SIZE=2
      # The bootstrapping interval in seconds. This makes the node contact the
      # specified bootstrappers at a given interval to discover new peers.
      - CONCORDIUM_NODE_CONNECTION_BOOTSTRAPPING_INTERVAL=1800
      # Haskell RTS flags to pass to consensus. `-N2` means to use two threads
      # for consensus operations. `-I0` disables the idle garbage collector
      # which reduces CPU load for non-validator nodes.
      - CONCORDIUM_NODE_RUNTIME_HASKELL_RTS_FLAGS=-N2,-I0
    entrypoint: ["/concordium-node"]
    # Exposed ports. The ports the node listens on inside the container (defined
    # by `CONCORDIUM_NODE_LISTEN_PORT` and `CONCORDIUM_NODE_GRPC2_LISTEN_PORT`)
    # should match what is defined here. When running multiple nodes the
    # external ports should be changed so as not to conflict.
    # In the mapping below, the first port is the `host` port, and the second
    # port is the `container` port. When the `container` port is changed the
    # relevant environment variable listed above must be changed as well. For
    # example, changing `20000:20000` to `20000:13000` would mean that
    # `CONCORDIUM_NODE_GRPC2_LISTEN_PORT` should be set to `13000`. Otherwise
    # the node's gRPC interface will not be available from the host.
    ports:
    - "8888:8888"
    - "20000:20000"
    volumes:
    # The node's database should be stored in a persistent volume so that it
    # survives container restart. In this case we map the **host** directory
    # /var/lib/concordium-mainnet to be used as the node's database directory.
    - /var/lib/concordium-mainnet:/mnt/data
  # The collector reports the state of the node to the network dashboard. A node
  # can run without reporting to the network dashboard. Remove this section if
  # that is desired.
  mainnet-node-collector:
    platform: "linux/amd64"
    container_name: mainnet-node-collector
    image: concordium/mainnet-node:latest
    pull_policy: always
    network_mode: bridge
    environment:
      # Settings that should be customized by the user.
      - CONCORDIUM_NODE_COLLECTOR_NODE_NAME=docker-test-mainnet
      # Environment specific settings.
      - CONCORDIUM_NODE_COLLECTOR_URL=https://dashboard.mainnet.concordium.software/nodes/post
      # Collection settings.
      # How often to collect the statistics from the node.
      - CONCORDIUM_NODE_COLLECTOR_COLLECT_INTERVAL=5000
      # The URL where the node can be reached. Note that this will use the
      # docker created network which maps `mainnet-node` to the internal IP of
      # the `mainnet-node`. If the name of the node service is changed from
      # `mainnet-node` then the name here must also be changed.
      # The port also has to be the same as in `CONCORDIUM_NODE_GRPC2_LISTEN_PORT`
      # that is defined for the node.
      - CONCORDIUM_NODE_COLLECTOR_GRPC_HOST=http://mainnet-node:20000
    entrypoint: ["/node-collector"]

This URL only works from inside Docker. I see your network mode is bridge, so from outside Docker, i.e. from your host, you can probably do

concordium-client --grpc-ip 127.0.0.1 ...

Thank you Team,

We were able to get the Docker image up and running. We are hosting the yaml file on an Azure VM running Ubuntu, and we have installed all the required Docker packages.

We have added a DNS name with SSL. So do we have to add an inbound network rule for port 20000, and is there any specific way to use it in building the smart contract?

Example
concordium-client module deploy ./dist/module.wasm.v1 --sender "key" --name CIS2-Multi --grpc-port 20000 --grpc-ip node.testnet.concordium.com

I don’t quite understand what you mean by using it in “building smart contract”. The concordium-client command is for deploying a module.

Do you mean how to access the node via SSL? Do you do SSL-offloading or did you set up the node with TLS certificates?

Thanks for the response team,

So we have created a VM in Azure with a DNS address, say "http://concordium.azure.com", inside which we are running Docker with the command "docker-compose -f mainnet.yaml up -d" ("-d" so that it runs in the background).

Now, how can we use that particular DNS address, i.e. "http://concordium.azure.com", as the --grpc-ip during the deployment of the smart contract, as shown in the example above?

So if you have routed the ports correctly then you would

concordium-client --grpc-ip concordium.azure.com --grpc-port 20000 ...

Thank you! This did help.

Lastly, is there a way to check if the node is running?

We ask because we are running a verifier; when we execute it against this node, it listens to the data. But during the deployment of the contract it says "Error: A GRPC error occurred: GRPC error: account or block not found."

With concordium client, an easy check is to do

concordium-client consensus status

and see if it responds, and also how up to date it is.

If you’ve recently started the node, perhaps it is not caught up yet. That could be why you are seeing this error, which indicates you are trying to access an account that (from the node’s point of view) does not exist.

Thank you!

We hosted the docker container yesterday.

When we execute "concordium-client consensus status", the last updated block time shows today.

Hello team,

We are running our node in a VM; the output of the command "docker ps -a" is attached below.

As mentioned earlier, we are using Azure, and its DNS address is "signax-concordium.southafricanorth.cloudapp.azure.com". The screenshot of the Mainnet node named "signax-concordium-main" in CCDScan is attached below.

So now when we execute the command "concordium-client module deploy ./dist/module.wasm.v1 --sender 4b4nA5QQSws1ZXdYjnteu2TV49pce8DDRdw9mze1V82fq2HUfX --name Market-NFT3 --grpc-port 20000 --grpc-ip signax-concordium.southafricanorth.cloudapp.azure.com", there is an error "Error: A GRPC error occurred: GRPC error: account or block not found".

Also, when we run on testnet with --grpc-ip "node.testnet.concordium.com", we get an error. The screenshot is attached below.

Mainnet.yaml is attached below.

version: "3"
services:
  mainnet-node:
    platform: "linux/amd64"
    container_name: mainnet-node
    image: concordium/mainnet-node:latest
    pull_policy: always
    network_mode: bridge
    environment:
      - CONCORDIUM_NODE_CONNECTION_BOOTSTRAP_NODES=bootstrap.mainnet.concordium.software:8888

      - CONCORDIUM_NODE_CONSENSUS_GENESIS_DATA_FILE=/mainnet-genesis.dat

      - CONCORDIUM_NODE_CONSENSUS_DOWNLOAD_BLOCKS_FROM=https://catchup.mainnet.concordium.software/blocks.idx

      - CONCORDIUM_NODE_DATA_DIR=/mnt/data
      - CONCORDIUM_NODE_CONFIG_DIR=/mnt/data

      - CONCORDIUM_NODE_LISTEN_PORT=8888

      - CONCORDIUM_NODE_CONNECTION_DESIRED_NODES=5

      - CONCORDIUM_NODE_CONNECTION_MAX_ALLOWED_NODES=10

      - CONCORDIUM_NODE_GRPC2_LISTEN_ADDRESS=0.0.0.0

      - CONCORDIUM_NODE_GRPC2_LISTEN_PORT=20000

      - CONCORDIUM_NODE_CONNECTION_HARD_CONNECTION_LIMIT=20

      - CONCORDIUM_NODE_CONNECTION_THREAD_POOL_SIZE=2

      - CONCORDIUM_NODE_CONNECTION_BOOTSTRAPPING_INTERVAL=1800

      - CONCORDIUM_NODE_RUNTIME_HASKELL_RTS_FLAGS=-N2,-I0
    entrypoint: ["/concordium-node"]

    ports:
      - "8888:8888"
      - "20000:20000"
    volumes:
      - /var/lib/concordium-mainnet:/mnt/data

  mainnet-node-collector:
    platform: "linux/amd64"
    container_name: mainnet-node-collector
    image: concordium/mainnet-node:latest
    pull_policy: always
    network_mode: bridge
    environment:
      - CONCORDIUM_NODE_COLLECTOR_NODE_NAME=docker-test-mainnet

      - CONCORDIUM_NODE_COLLECTOR_URL=https://dashboard.mainnet.concordium.software/nodes/post

      - CONCORDIUM_NODE_COLLECTOR_COLLECT_INTERVAL=5000

      - CONCORDIUM_NODE_COLLECTOR_GRPC_HOST=http://signax-concordium.southafricanorth.cloudapp.azure.com:20000
    
    entrypoint: ["/node-collector"]

Are you sure your node is caught up to the head of the chain? From your CCDScan screenshot it seems you are still about 10M blocks behind. If so, that possibly explains why you get the "block not found" error.


Thanks for responding.

What could be a workaround for it? Is there any way to resolve this issue?

You should wait until the node is caught up. That should resolve your issue.

Hi @abizjak , thank you for your reply!

It’s been a few days now and the mainnet node is still not live. Attaching the screenshot from Docker, which shows the container has been up for the last 4 days.

Your guidance would be helpful on this. Attaching the "docker logs mainnet-node" screenshot and the yaml file for your reference.

version: "3"
services:
  mainnet-node:
    platform: "linux/amd64"
    container_name: mainnet-node
    image: concordium/mainnet-node:latest
    pull_policy: always
    network_mode: bridge
    environment:
      - CONCORDIUM_NODE_CONNECTION_BOOTSTRAP_NODES=bootstrap.mainnet.concordium.software:8888

      - CONCORDIUM_NODE_CONSENSUS_GENESIS_DATA_FILE=/mainnet-genesis.dat

      - CONCORDIUM_NODE_CONSENSUS_DOWNLOAD_BLOCKS_FROM=https://catchup.mainnet.concordium.software/blocks.idx

      - CONCORDIUM_NODE_DATA_DIR=/mnt/data
      - CONCORDIUM_NODE_CONFIG_DIR=/mnt/data

      - CONCORDIUM_NODE_LISTEN_PORT=8888

      - CONCORDIUM_NODE_CONNECTION_DESIRED_NODES=5

      - CONCORDIUM_NODE_CONNECTION_MAX_ALLOWED_NODES=10

      - CONCORDIUM_NODE_GRPC2_LISTEN_ADDRESS=0.0.0.0

      - CONCORDIUM_NODE_GRPC2_LISTEN_PORT=20000

      - CONCORDIUM_NODE_CONNECTION_HARD_CONNECTION_LIMIT=20

      - CONCORDIUM_NODE_CONNECTION_THREAD_POOL_SIZE=2

      - CONCORDIUM_NODE_CONNECTION_BOOTSTRAPPING_INTERVAL=1800

      - CONCORDIUM_NODE_RUNTIME_HASKELL_RTS_FLAGS=-N2,-I0
    entrypoint: ["/concordium-node"]

    ports:
      - "8888:8888"
      - "20000:20000"
    volumes:
      - /var/lib/concordium-mainnet:/mnt/data

  mainnet-node-collector:
    platform: "linux/amd64"
    container_name: mainnet-node-collector
    image: concordium/mainnet-node:latest
    pull_policy: always
    network_mode: bridge
    environment:
      - CONCORDIUM_NODE_COLLECTOR_NODE_NAME=signax-concordium-mainnet

      - CONCORDIUM_NODE_COLLECTOR_URL=https://dashboard.mainnet.concordium.software/nodes/post

      - CONCORDIUM_NODE_COLLECTOR_COLLECT_INTERVAL=5000

      - CONCORDIUM_NODE_COLLECTOR_GRPC_HOST=http://signax-concordium.southafricanorth.cloudapp.azure.com:20000
    entrypoint: ["/node-collector"]

Can you show the output of concordium-client consensus status?

Sure, @abizjak.

Here is the output of the command

Ok, your node looks caught up.

To be clear, this command is still failing

concordium-client module deploy ./dist/module.wasm.v1 --sender 4b4nA5QQSws1ZXdYjnteu2TV49pce8DDRdw9mze1V82fq2HUfX --name Market-NFT3 --grpc-port 20000 --grpc-ip signax-concordium.southafricanorth.cloudapp.azure.com

with the error "Error: A GRPC error occurred: GRPC error: account or block not found"?

Yes, it's failing. But now it says "Error: The GRPC request failed: Cannot establish connection to GRPC endpoint." We have kept our port open in the Azure VM.