gRPC API metadata, authentication

Hi,

If I’m communicating with a node on the testnet via the gRPC API, I’m using the metadata:

{
    "authentication": "rpcadmin"
}

But what if I want to communicate with a node on the mainnet - what authentication value should I be using then?

The authentication token is set when starting the node via the command line option --rpc-server-token or the environment variable CONCORDIUM_NODE_RPC_SERVER_TOKEN, with the default value being rpcadmin (independent of which network it is on). So unless you have configured your node otherwise, the same metadata is likely to work.

Hmm OK, I thought it had something to do with security. How does one secure a node then - or is the general idea to let all nodes be available to anyone?
What is the idea of this metadata attribute?

You can set the token to restrict access, and that’s probably worth doing if you make your node publicly accessible via gRPC. (One caveat is that I don’t think you can set the token for the desktop wallet.) However, I would generally recommend against making your node publicly accessible on the gRPC port at all. This is what we do at Concordium: some of our nodes are accessible on gRPC via our VPN, or are queried indirectly via the network dashboard, for instance, but none of the nodes we run are publicly accessible via gRPC.

General questions about how to do different things with the gRPC API - is this the place I should direct those, or is there somewhere else?

For one, I would like to know how to get the transaction history for a smart contract - e.g. the piggy bank: insert transactions, successful and failed smash attempts, etc.

I have read about how users of blockchain technology in their applications, like Walmart, can easily search back in the chain to see when something happened, or where something - e.g. a product - was produced and by whom.

You can do this in two ways.
One is to use the getBlockSummary endpoint to query each block and see whether your contract is affected by any transactions in that block. This works reasonably well if the contract you are interacting with has a lot of activity, but it does not work well for getting historical data, since you would have to traverse many blocks in the past.
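To illustrate the scanning approach, here is a Python sketch that filters one block’s transaction summaries down to those touching a given contract instance. The JSON layout assumed here (a `result` with an `events` list whose entries carry an `address` with `index` and `subindex`) is my reading of the getBlockSummary output and should be verified against a real response before relying on it.

```python
# Sketch: filter one block's transactions down to those that touch a given
# contract instance. The summary layout below is an assumption about the
# getBlockSummary JSON, not a verified schema.

def touches_contract(tx_summary, index, subindex):
    """True if any event in the transaction affects the given instance."""
    events = tx_summary.get("result", {}).get("events", [])
    return any(
        e.get("address", {}).get("index") == index
        and e.get("address", {}).get("subindex") == subindex
        for e in events
    )

# Illustrative data in the assumed shape.
block_summary = {
    "transactionSummaries": [
        {"hash": "tx1", "result": {"events": [
            {"tag": "Updated", "address": {"index": 17, "subindex": 0}}]}},
        {"hash": "tx2", "result": {"events": []}},
    ]
}

hits = [t for t in block_summary["transactionSummaries"]
        if touches_contract(t, 17, 0)]
print([t["hash"] for t in hits])  # ['tx1']
```

You would run this over every block of interest, which is exactly why it scales poorly for historical queries.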

The other is that the node can be configured to do “transaction logging”, where it creates an index of affected smart contracts and accounts. Then you can look up which transactions involved your contract in the past. See https://github.com/Concordium/concordium-node/blob/main/docs/transaction-logging.md#contract-transaction-index for details of the transaction logging.

Note that both solutions rely on you running a node. We do not provide any public endpoint to query this information at the moment.

Hmm, this seems to me like a backwards way of doing such a thing.
If I wish to look up the history of, say, a specific product (contract), looking in every box to see if something is there seems far from optimal.
I would have assumed it was possible to search for this history with some handle, like a product number (or, say, a contract ID), and get only the transaction history that has to do with this specific contract… or am I misunderstanding this whole thing?
If this isn’t possible, I think I’m failing to see the great advantage of this, other than the validating properties of the chain.

I am running a node, this is how I communicate with the chain.

In regard to the logging: is this the way to go if you want to be able to search for things as described above? If so, I assume you’d have to run at least two nodes if your node is also a baker node, so as not to slow it down and miss opportunities.
By doing that, one could gain access to all finalized block data through PostgreSQL?

Yes, if you do it when you need the information. The node does not provide indices for every kind of situation. It provides some ways of retrieving data, but it cannot cover all use cases. For the rest you must build it yourself using the available API.

For specific situations the intention is that you would build an external service that runs alongside the node, queries it, and maintains the specific information you want for quick retrieval.

Yes, you would ideally run a separate node for the logging. With low traffic it won’t matter, but with high load it is a significant burden.

I realize that this might be a bit beyond the scope of this forum.

So if I’m understanding you correctly, the data on the chain only really has to do with the “truthfulness” of things/claims, rather than information about the things themselves?

Say product-A is produced at location-B as part of batch-C, made out of materials-D at a cost of cost-E. If we’re trying to track this product from factory to end consumer, the chain would in this case hold a reference to the product, a reference to the current location, and the date it arrived at that location.
All data about A, B, C, D and E would be information you would have to find elsewhere, in other systems not part of the chain.

In this thought scenario the chain only confirms the truthfulness of the current location of the product and when it arrived at this location.

Does this make sense, or am I completely off here? - I’m asking because I’m somewhat groping in the dark when it comes to finding really good use cases for this blockchain.

Yes, this is generally how things should be done. The chain should not really be used as a general data store, but can be used to record that things happened. Typically, a reference to some off-chain data would be represented by a hash of the data, since this cannot be forged (assuming the security of the hash function, of course).
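As a toy example of that pattern, here is a Python sketch that commits to an off-chain record by hashing it. The record fields are made up for illustration; the point is that the chain only needs to store the digest.

```python
import hashlib
import json

# Hypothetical off-chain record for the product-tracking example above.
record = {
    "product": "product-A",
    "location": "location-B",
    "batch": "batch-C",
    "materials": "materials-D",
    "cost": "cost-E",
}

# Serialize deterministically (sorted keys) so the same record always
# hashes to the same value, then take the SHA-256 digest. The digest is
# what you would store on chain; the record itself stays off-chain.
serialized = json.dumps(record, sort_keys=True).encode("utf-8")
digest = hashlib.sha256(serialized).hexdigest()

# Anyone holding the record can recompute the digest and compare it with
# the on-chain value; any change to the record changes the digest.
print(len(digest))  # 64 hex characters
```

Anyone who later obtains the record from the off-chain system can recompute the hash and check it against the one recorded on the chain.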

OK, so if we go back to the piggy bank. I have an instance on the testnet that I have inserted GTU into a few times and smashed.
If I wanted to find out say, when GTU were inserted or who it was doing it, how would I go about doing this?

In regard to the gRPC API that is.

Would this require node logging?

Yes. If you run the node with transaction logging, then you can look up in the database all transactions affecting a particular smart contract instance, and see what they were and when they occurred.

This would be done with a query to the PostgreSQL database, rather than a gRPC query to the node itself. You can look up the transactions affecting a particular smart contract instance with the following query (where $1 and $2 are substituted with the index and subindex of the contract instance):

SELECT summaries.timestamp, summaries.summary
FROM cti JOIN summaries ON cti.summary = summaries.id
WHERE cti.index = $1 AND cti.subindex = $2
ORDER BY cti.id

The summary is a JSON object that describes the transaction.
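For completeness, here is a small Python sketch of running that query with psycopg2 (assumed to be installed; the connection parameters are placeholders). The table and column names are taken from the query above.

```python
# Sketch: look up all transactions affecting one contract instance in the
# transaction-logging database. Assumes psycopg2 and a reachable database;
# connection parameters in the usage comment are placeholders.

CONTRACT_TX_QUERY = """
SELECT summaries.timestamp, summaries.summary
FROM cti JOIN summaries ON cti.summary = summaries.id
WHERE cti.index = %s AND cti.subindex = %s
ORDER BY cti.id
"""

def fetch_contract_history(conn, index, subindex):
    """Return (timestamp, summary) rows for the given contract instance."""
    with conn.cursor() as cur:
        cur.execute(CONTRACT_TX_QUERY, (index, subindex))
        return cur.fetchall()

# Usage (not run here):
# import psycopg2
# conn = psycopg2.connect(host="127.0.0.1", port=5432,
#                         dbname="postgres", user="postgres", password="...")
# for timestamp, summary in fetch_contract_history(conn, 0, 0):
#     print(timestamp, summary)
```

Note that psycopg2 uses `%s` placeholders rather than the `$1`/`$2` positional style shown in the raw SQL above; the query is otherwise the same.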

Am I right, in regard to the piggy bank, if I say that only method calls (and thereby transactions) that are “successful” (allowed by the logic) would be recorded on the chain, and that information about everything else (e.g. failed attempts) would only be available through node logging?

What I mean (assume) is that there would be no record on the chain of attempts to insert GTU into an already smashed piggy bank, attempts to smash a piggy bank that is already smashed, or attempts to smash a piggy bank by someone who isn’t the owner - in short, anything that the logic of the smart contract doesn’t allow and will reject/deny.

Is this a correct assumption?

And if so, also that these failed attempts would only be available through node logging on the node through which they were made - i.e. I would not be able to see on my node what other people attempted (and failed) to do through other nodes.

This assumption is incorrect.
There is a distinct difference between a transaction being successful, and it being included on the chain.

A transaction will generally only be left out of the chain if the signature is incorrect, the allowed energy is insufficient, or the sending account cannot cover the cost of the energy. Otherwise it is included even if it fails.

The failed attempts to interact with the piggy bank will result in the smart contract execution failing, but they will still appear on the chain as failed transactions (with the error codes).
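To make the distinction concrete, here is a small Python sketch that separates successful from rejected transactions in a list of summaries. The field names used here (`result`, `outcome`, `rejectReason`) are my reading of the summary JSON and should be checked against your own data.

```python
# Illustrative summaries in the assumed shape: both transactions are on
# chain, but the second one was rejected by the contract logic.
summaries = [
    {"hash": "tx1", "result": {"outcome": "success"}},
    {"hash": "tx2", "result": {"outcome": "reject",
                               "rejectReason": {"tag": "RejectedReceive"}}},
]

def is_rejected(summary):
    # "Rejected" means the contract execution failed, not that the
    # transaction is absent from the chain.
    return summary.get("result", {}).get("outcome") == "reject"

rejected = [s["hash"] for s in summaries if is_rejected(s)]
print(rejected)  # ['tx2']
```

Both entries would show up in the transaction-logging database; the outcome field is how you tell them apart.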

So these attempts would be available in the database query that Thomas specified earlier.

If I understood the database mentioned earlier correctly, it has to do with node logging, i.e. data that my node writes to its own local database - not something that is distributed to all nodes.

If this is in fact the case, I doubt that I would be able to find any data about failed attempts to interact with a smart contract made through a different node - say, if someone were to try to smash my piggy bank and performed the attempt through their own node (which isn’t my node).
How would I find any information about that in my local node-logging database?

So we have to distinguish between data available on the chain (aka available to anyone from anywhere) and data available to just me due to my node being set to perform logging. I assume that node logging only has to do with actions through this particular node.

If all of the above is true, my question still remains: would I be able to find information about others’ attempts to interact with a smart contract (through other nodes) by asking the chain, not my local database (since I assume no information about such attempts would reside there)?

The transaction logging that you can enable on a node is not logging the transactions that you are sending to it, but logging the transactions that are finalized on the chain.

So the data available from transaction logging is not “available to just you”, but is the data that is on the chain.

The advantage of having this database is that it indexes this information, so you can easily look up transactions affecting a specific contract or account, instead of checking each block with getBlockSummary.

I have PostgreSQL installed and I have also created a database and run the script that creates the required tables.
I can login to the database with pgAdmin and everything looks fine here.

I have changed “/etc/systemd/system/concordium-testnet-node-collector.service.d/override.conf” with the following:

Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING=true'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_NAME=postgres'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_HOST=127.0.0.1'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_PORT=5432'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_USERNAME=postgres'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_PASSWORD=*********'

But when I restart the node, I don’t see any new session on the PostgreSQL database.
Is this not the correct file to edit?

I think it would need to be in “/etc/systemd/system/concordium-testnet-node.service.d/override.conf” (i.e. without the “-collector”). The Ubuntu setup installs two services: one for the node itself, and a second, the collector, which monitors the node and reports to the network dashboard. The logging configuration needs to be set for the former.
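For reference, the same environment variables moved into the node’s own drop-in would look something like this (a sketch; systemd drop-in files need a [Service] section header, and the password is a placeholder):

```ini
# /etc/systemd/system/concordium-testnet-node.service.d/override.conf
[Service]
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING=true'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_NAME=postgres'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_HOST=127.0.0.1'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_PORT=5432'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_USERNAME=postgres'
Environment='CONCORDIUM_NODE_TRANSACTION_OUTCOME_LOGGING_PASSWORD=*********'
```

After editing, run `sudo systemctl daemon-reload` and restart the node service for the change to take effect.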

Hmm my setup doesn’t seem to have this path, see picture.

I believe the folder “concordium-node-collector.service.d” is from earlier when testnet and mainnet services shared the same name.

You should be able to use

sudo systemctl edit concordium-testnet-node.service

to create and edit this file. (For more information about the configuration, see here.)