gRPC API questions

So, this isn’t about the client, despite what the title might imply.

Since I’m interested in working with C# and, from what I can tell, there is currently no SDK being worked on or in the pipeline, I need to talk to the gRPC API directly.

So, where should I direct my questions in regard to this subject?

I’m interested in knowing how to do stuff like:

  • deploy a module/smart contract to the chain?
  • instantiate a smart contract on the chain
  • call a method on a smart contract on the chain

Unfortunately, the documentation is currently a bit patchy on these details. The operations that you describe are transactions, which can be sent to the chain via the SendTransaction gRPC endpoint. (You can then use GetTransactionStatus to monitor the outcome.)

Now the issue becomes generating the data that SendTransaction requires, which is a serialized transaction (including the signature(s)). The bluepaper details the serialization format in section 24.4. You need to send a correctly formatted BlockItem. To generate the signature, you need to use the account key of the sender (technically, an account can have multiple keys that may be used in the signature, but most commonly there is only one) and sign the encoded payload with it in the Ed25519 signature scheme.
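The "serialize, then sign" approach described above can be sketched in C#. This is a hedged illustration of the byte-appending pattern only: the field order and widths below mirror the transaction header fields discussed later in this thread (account address, nonce, energy, payload size, expiry) as an assumption, and the bluepaper's section 24.4 remains authoritative.

```csharp
using System;
using System.Buffers.Binary;
using System.IO;

// Hedged sketch of the general serialization pattern: fixed-size fields
// appended in order, with integers in big-endian byte order. Verify the
// exact layout against the bluepaper before relying on this.
static byte[] SerializeHeaderSketch(byte[] accountAddress, ulong nonce,
                                    ulong energy, uint payloadSize, ulong expiry)
{
    using var ms = new MemoryStream();
    Span<byte> u64 = stackalloc byte[8];
    Span<byte> u32 = stackalloc byte[4];

    ms.Write(accountAddress);                                // 32 raw bytes
    BinaryPrimitives.WriteUInt64BigEndian(u64, nonce);       ms.Write(u64);
    BinaryPrimitives.WriteUInt64BigEndian(u64, energy);      ms.Write(u64);
    BinaryPrimitives.WriteUInt32BigEndian(u32, payloadSize); ms.Write(u32);
    BinaryPrimitives.WriteUInt64BigEndian(u64, expiry);      ms.Write(u64);
    return ms.ToArray();
}

// The Ed25519 signature would then be computed over the serialized
// transaction bytes (check the bluepaper for the exact signing input).
var header = SerializeHeaderSketch(new byte[32], 1, 1000, 41, 1650000000);
Console.WriteLine(header.Length); // 60
```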

The link seems to be dead.

Hi Petlan,

We actually have some documentation for our gRPC interface:

Let us know if anything is unclear, then we’ll update the documentation :blush:

Have a great day!
/ Kasper

Hi again.

I was trying to look at these - most of the methods require a block hash, though I’m not sure where to get this hash.
If I query the postgresql db, I get a hash as part of the “left” attribute object, but using one of these hashes doesn’t yield any result with the gRPC API methods.

It seems like regardless of which block hash I send in the request (i.e. method GetBlockInfo), the response is always:

{
    "value": null
}

If I send this:

{
  "block_hash": "bf713572e026d0e76ea4b5770b002e5e62b443181bbe39cefa160a5d81e9b356"
}

Which is the hash of the last finalized block (at the time of writing), the result is the above null response.

UPDATE 1

Now I’m getting a response though not the response I’m expecting:

Status(StatusCode="Internal", Detail="Bad gRPC response. Response protocol downgraded to HTTP/1.1.")
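For what it's worth, this particular error from Grpc.Net.Client usually means the channel fell back to HTTP/1.1, commonly because the endpoint is plain-text (no TLS) and unencrypted HTTP/2 is not enabled. A sketch of the usual workaround on .NET Core 3.x follows; whether it applies here is an assumption, and the address and port are placeholders:

```csharp
using System;
using Grpc.Net.Client;

// Grpc.Net.Client requires HTTP/2. For a plain-text (http://) endpoint on
// .NET Core 3.x, this switch must be set before the channel is created,
// otherwise the connection silently downgrades to HTTP/1.1.
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

// Placeholder address - substitute your node's gRPC host and port.
using var channel = GrpcChannel.ForAddress("http://localhost:10000");
```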

The IntelliSense in my environment points me to this documentation: https://gitlab.com/Concordium/notes-wiki/wikis/Consensus-queries#getblockinfo

However, this location doesn’t exist.

UPDATE 2

OK, I found an issue within the code now. My attempt to deserialize (in C#) the JSON object:

{
  "block_hash": "bf713572e026d0e76ea4b5770b002e5e62b443181bbe39cefa160a5d81e9b356"
}

To an object of the type BlockHash (generated from the proto file) fails - probably because the actual attribute on such an object isn’t called “block_hash” like the JSON indicates, but in fact “BlockHash_” if you want to set its value - and then if you serialize it after setting this value, the attribute is called “blockHash”… Three different names for the same attribute… I’m confused.

The odd thing for me now is, that I can call the GetBlockInfo method with the above JSON object if I use BloomRPC (RPC client application) and that works.

But I can’t do the same from my own code, unless I extract the string contained in the “block_hash” attribute and create a new BlockHash object, setting its “BlockHash_” attribute to this string - and then call the method with that BlockHash object as the request, which isn’t dynamic at all.

What I wanted was to be able to send these JSON objects (as described in the proto file) to my application and have it call the appropriate gRPC API methods using objects created from this JSON. But if the deserialization fails because of what I just described, I don’t see how that will be possible.

Am I missing something?

If I send:

{
        "BlockHash_": "bf713572e026d0e76ea4b5770b002e5e62b443181bbe39cefa160a5d81e9b356"
}

then it works, because the code is now able to successfully deserialize the JSON into a BlockHash object… Why is this? - Why doesn’t the generated BlockHash class look like it does in the proto file?

Can anyone explain to me why Visual Studio, when generating C# code from the .proto file, changes the way certain objects look?
It makes it impossible, from outside the code, to know how to create a serialized JSON request when the attributes change names like that.

In this example it is the BlockHash object. Notice in the .cs file how the one attribute named “block_hash” in the .proto file, becomes “BlockHash_” in the .cs file.

Example:
In the .proto file:

message BlockHash {
  string block_hash = 1;
}

In the .cs file:

public sealed partial class BlockHash : pb::IMessage<BlockHash>
  #if !GOOGLE_PROTOBUF_REFSTRUCT_COMPATIBILITY_MODE
      , pb::IBufferMessage
  #endif
  {
    private static readonly pb::MessageParser<BlockHash> _parser = new pb::MessageParser<BlockHash>(() => new BlockHash());
    private pb::UnknownFieldSet _unknownFields;
    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public static pb::MessageParser<BlockHash> Parser { get { return _parser; } }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public static pbr::MessageDescriptor Descriptor {
      get { return global::gRPC_ASPNET5.ConcordiumP2PRpcReflection.Descriptor.MessageTypes[12]; }
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    pbr::MessageDescriptor pb::IMessage.Descriptor {
      get { return Descriptor; }
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public BlockHash() {
      OnConstruction();
    }

    partial void OnConstruction();

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public BlockHash(BlockHash other) : this() {
      blockHash_ = other.blockHash_;
      _unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public BlockHash Clone() {
      return new BlockHash(this);
    }

    /// <summary>Field number for the "block_hash" field.</summary>
    public const int BlockHash_FieldNumber = 1;
    private string blockHash_ = "";
    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public string BlockHash_ {
      get { return blockHash_; }
      set {
        blockHash_ = pb::ProtoPreconditions.CheckNotNull(value, "value");
      }
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public override bool Equals(object other) {
      return Equals(other as BlockHash);
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public bool Equals(BlockHash other) {
      if (ReferenceEquals(other, null)) {
        return false;
      }
      if (ReferenceEquals(other, this)) {
        return true;
      }
      if (BlockHash_ != other.BlockHash_) return false;
      return Equals(_unknownFields, other._unknownFields);
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public override int GetHashCode() {
      int hash = 1;
      if (BlockHash_.Length != 0) hash ^= BlockHash_.GetHashCode();
      if (_unknownFields != null) {
        hash ^= _unknownFields.GetHashCode();
      }
      return hash;
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public override string ToString() {
      return pb::JsonFormatter.ToDiagnosticString(this);
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public void WriteTo(pb::CodedOutputStream output) {
    #if !GOOGLE_PROTOBUF_REFSTRUCT_COMPATIBILITY_MODE
      output.WriteRawMessage(this);
    #else
      if (BlockHash_.Length != 0) {
        output.WriteRawTag(10);
        output.WriteString(BlockHash_);
      }
      if (_unknownFields != null) {
        _unknownFields.WriteTo(output);
      }
    #endif
    }

    #if !GOOGLE_PROTOBUF_REFSTRUCT_COMPATIBILITY_MODE
    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    void pb::IBufferMessage.InternalWriteTo(ref pb::WriteContext output) {
      if (BlockHash_.Length != 0) {
        output.WriteRawTag(10);
        output.WriteString(BlockHash_);
      }
      if (_unknownFields != null) {
        _unknownFields.WriteTo(ref output);
      }
    }
    #endif

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public int CalculateSize() {
      int size = 0;
      if (BlockHash_.Length != 0) {
        size += 1 + pb::CodedOutputStream.ComputeStringSize(BlockHash_);
      }
      if (_unknownFields != null) {
        size += _unknownFields.CalculateSize();
      }
      return size;
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public void MergeFrom(BlockHash other) {
      if (other == null) {
        return;
      }
      if (other.BlockHash_.Length != 0) {
        BlockHash_ = other.BlockHash_;
      }
      _unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
    }

    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    public void MergeFrom(pb::CodedInputStream input) {
    #if !GOOGLE_PROTOBUF_REFSTRUCT_COMPATIBILITY_MODE
      input.ReadRawMessage(this);
    #else
      uint tag;
      while ((tag = input.ReadTag()) != 0) {
        switch(tag) {
          default:
            _unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
            break;
          case 10: {
            BlockHash_ = input.ReadString();
            break;
          }
        }
      }
    #endif
    }

    #if !GOOGLE_PROTOBUF_REFSTRUCT_COMPATIBILITY_MODE
    [global::System.Diagnostics.DebuggerNonUserCodeAttribute]
    [global::System.CodeDom.Compiler.GeneratedCode("protoc", null)]
    void pb::IBufferMessage.InternalMergeFrom(ref pb::ParseContext input) {
      uint tag;
      while ((tag = input.ReadTag()) != 0) {
        switch(tag) {
          default:
            _unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, ref input);
            break;
          case 10: {
            BlockHash_ = input.ReadString();
            break;
          }
        }
      }
    }
    #endif

  }

Hi petlan,

Are you still focused on deploying, initializing, and updating smart contracts, or has your focus shifted? If you are still on that path, I believe most of the information needed is described in our gRPC for smart contracts docs. And if not, then please let us know what’s missing.

Please note that most of our gRPC calls return a JsonResponse as opposed to a more structured type (e.g. NodeInfoResponse), which means that you need to parse the returned JSON, and that you will likely need to define some data types for the return values yourself.

Regarding the GetBlockInfo call:

  • The doc comments still link to our deprecated repo, we’ll update it. Sorry about that. The method is documented in our general gRPC documentation.
  • It would make sense to get the current best block with the GetConsensusStatus call and then get the block info for that block.
    • GetConsensusStatus returns a JsonResponse, with the field value, which contains the JSON. The JSON is an object with a number of fields, including the bestBlock field. It is the value of this field you need for constructing the BlockHash type.
    • In other words, since GetConsensusStatus does not return a BlockHash type, you will need to write some code for extracting the bestBlock field and creating a BlockHash from it.
  • You can then call GetBlockInfo with the constructed BlockHash.
    • Again, this method returns a JsonResponse, which you will need to parse for your purposes.
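A minimal sketch of that extraction step, using System.Text.Json. The JSON string below is an illustrative stand-in for the contents of the JsonResponse value field (trimmed to the relevant bestBlock field), not real node output:

```csharp
using System;
using System.Text.Json;

// Illustrative stand-in for the string found in JsonResponse.value
// after calling GetConsensusStatus, trimmed to the field we need.
string value = "{\"bestBlock\":\"bf713572e026d0e76ea4b5770b002e5e62b443181bbe39cefa160a5d81e9b356\"}";

using var doc = JsonDocument.Parse(value);
string bestBlock = doc.RootElement.GetProperty("bestBlock").GetString();

// The extracted hash can then populate the generated request message:
// var request = new BlockHash { BlockHash_ = bestBlock };
Console.WriteLine(bestBlock);
```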

Regarding the BlockHash_:

  • First off, I don’t think you’ll need the generated JSON instance for this BlockHash as we never return a BlockHash and our API does not expect a BlockHash as JSON.
  • Protobuf naturally has some rules for how it generates code from a proto file, and for C# that apparently results in this name. I believe the underscore is added to avoid a naming conflict (or at least confusion), between the class BlockHash and its property BlockHash_. I can see that appending an underscore is the default for private fields, but it only does so for properties, when there is a potential conflict. As an example, look at PeerListResponse, whose private fields have underscores, but its properties do not, presumably because the fields are named differently than the class itself (Peers and PeerType).

Since you are basically building a partial C# SDK, it might be useful for you to look at our other SDKs.

  • Concordium Java SDK (Java should be relatively easy for you to read, but the smart contract interaction is not implemented yet).
  • Concordium Node JS SDK (Fairly well-documented and with support for smart contracts).
  • Concordium Rust SDK (Should support smart contracts, but it is still WIP and mostly undocumented).

I hope this information allows you to progress.
Feel free to reach out again, if you have more questions.

Best regards,
Kasper

The JsonResponse type is not a problem. I don’t understand why they are used though, since they are basically a JSON object with one attribute “Value” which contains another JSON object that is “stringified”, meaning it’s a string full of backslashes everywhere… Why not just return that object instead? - but again, this is not a problem, I’m handling that.

The problem is the “BlockHash_” attribute. I don’t see any apparent naming conflict, since the real attribute of that object (in the proto) is called “block_hash” - nothing else is called that.

The problem arises when I want to create a REST API that calls the gRPC API. I wish to call my own API with a request object that contains some attributes, one of which is the request object for the gRPC API.
Many of the gRPC API methods take a BlockHash object as the request { “block_hash”: “some-hash-value” }, which is why I was trying to work with that first, since it is also a very simple object with only one attribute.

So the request for my own API could look like this:

{
   "attribute_a":"some text",
   "attribute_b":true,
   "attribute_c":300,
   "grpc_request":{
      "block_hash":"some-hash-value"
   },
   "attribute_d":256.66
}

The attributes a, b, c, and d are attributes for my own API, but the “grpc_request” attribute is the request object for the gRPC API.
So in order for my API to call the gRPC API, I need to deserialize this object in order to perform the gRPC method call with the correct type of argument - and this is where things go wrong.
Because the value of the above “grpc_request” attribute does not turn into that of a BlockHash object as described in the proto, which was { “block_hash”: “some-hash-value” }, my attempt to make that JSON into a BlockHash object results in { }, an empty object.

If however, I send this instead, then it works.

{
   "attribute_a":"some text",
   "attribute_b":true,
   "attribute_c":300,
   "grpc_request":{
      "BlockHash_":"some-hash-value"
   },
   "attribute_d":256.66
}

Unfortunately this breaks the whole idea of what I’m doing, since I would need to redefine the arguments for the gRPC API - and this was just a request with one attribute… What happens when the request is an object with multiple attributes, some of which are objects, and so on?

At the same time, I tried to send a request { “block_hash”: “some-hash-value” } to the gRPC API, via the BloomRPC client, and that works.
So it seems that the problem lies with the way Visual Studio generates C# code from .proto files, and I’m not sure what to do about it, if I can do anything about it at all.

Yes, the JsonResponses are not ideal, and we would like to change them at some point. However, there are a number of internal and external tools that rely on the current gRPC interface, so we must keep backwards compatibility.

Right, I see a few options that you can try (going from most ideal to less ideal).

  1. Try using lowerCamelCase, e.g. blockHash. According to their documentation, it should use lowerCamelCase for JSON fields in objects. At least if you are using proto3, so make sure to use the newest version. I am not sure why BlockHash_ even works for you at the moment.

  2. You can try modifying the .proto file and use the json_name attribute:

    message BlockHash {
       string block_hash = 1 [json_name="block_hash"];
    }
    

    And then regenerate the C# code (the json_name attribute is also explained in their documentation, which I linked in step 1).
    In Java, you can set preservingProtoFieldNames, which seems to be what you want, however, that is not supported in C# yet.

  3. If you are writing your own API, you could extract the block_hash field and create a BlockHash from it. I.e., something like:

    var blockHash = new BlockHash { BlockHash_ = theJson.grpc_request.block_hash };
    

I assume that it works with BloomRPC because they use the preservingProtoFieldNames setting in a language that supports it, e.g. Java.
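One more avenue that may be worth trying (an assumption on my part, not something verified in this thread): parse the incoming JSON with Google.Protobuf's own JsonParser rather than a general-purpose JSON library. The protobuf JSON mapping specification says parsers should accept both the lowerCamelCase name and the original proto field name, and JsonParser maps JSON names to proto fields directly, so the generated C# property name (BlockHash_) never enters into it. A sketch, assuming the BlockHash class generated from the .proto file:

```csharp
using Google.Protobuf;

// Sketch only: BlockHash here is the class generated from the .proto file.
// Both spellings below should populate the same field, since JsonParser
// resolves names against the proto descriptor, not the C# property name.
var a = JsonParser.Default.Parse<BlockHash>("{\"block_hash\": \"some-hash-value\"}");
var b = JsonParser.Default.Parse<BlockHash>("{\"blockHash\": \"some-hash-value\"}");

// Formatting goes the other way: JsonFormatter emits lowerCamelCase by
// default, which would explain the third name ("blockHash") seen earlier.
string json = JsonFormatter.Default.Format(a);
```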

Best regards,
Kasper

Option 1 isn’t really an option, since it is not me who is defining any of these names - I just work with what I’m getting.

So, I will check out option 2.

The preservingProtoFieldNames setting appears to be exactly what would solve the problem. A shame if it isn’t available in C# yet.
That being said, I don’t even see why this renaming has to be forced. Keeping the names as they are would be the way to go in any case - why change them?
If there were naming conflicts, there would be naming conflicts in the .proto file as well, which I very much doubt - I’m no proto expert, but I bet the file would be invalid if there were.

Option 3 is also not really an option, as described in the previous post - having to replicate all, most or any object from the .proto file is undesirable - if the gRPC API changes, I would have to redo all this every time, resulting in poor maintainability.

UPDATE 1

Unfortunately option 2 had no effect. The generated BlockHash object still has its attribute named “BlockHash_”.

I changed:

message BlockHash {
  string block_hash = 1;
}

to:

message BlockHash {
  string block_hash = 1 [json_name="block_hash"];
}

in the .proto file, cleaned and built my project, though same result.

I just found this in an article on visualstudiomagazine.com, which I guess explains why the attribute name “block_hash” can become “BlockHash_” on the BlockHash object from the .proto file.

https://visualstudiomagazine.com/articles/2020/01/06/defining-grpc-messages.aspx

The below example goes with the quote:

message CustomerResponse {
  int32 custid = 1;
  string firstName = 3;
  string lastName = 5;
}

In .NET Core, message formats are converted into classes with each field becoming a property on a class that has the same name as the message. .NET Core also converts the first character of your field names into uppercase when naming these properties. So, for example, the custId field in my previous example will become a CustId property on a CustomerResponse class in my code. Any underscores in your field names are also removed in this process and the following letter is uppercased (i.e. the Last_name field name becomes the LastName property).

This means that Visual Studio first renames the property to “BlockHash”, but since that is the name of the class itself (which is reserved for constructors), I guess it fixes that by adding an underscore, resulting in “BlockHash_”. So it is Visual Studio’s own urge to rename things that creates a conflict, and it then renames again to fix this self-inflicted naming conflict.

I just tested this, and it is in fact what is going on, because the .proto object BlockHashAndAmount also has an attribute named “block_hash”, and that one becomes “BlockHash” without a trailing underscore, likely because it doesn’t conflict with the name of the constructor.

Why they are doing this is beyond me - it makes no sense. Had they kept the names as described in the first line of the quote, everything would have been fine, and maintainability would have been straightforward, like updating a reference to an old-school SOAP service.

I agree that the renaming behaviour is both odd and unfortunate, but I doubt that it will change anytime soon.

Could you explain to me who will be sending you the requests to your API and why they can’t simply write BlockHash_ instead of block_hash in the grpc_request field? (This is a revised version of option 1, just using the names generated by C# instead).

I know that option 3 kind of breaks the whole idea of proto/gRPC, but it might be the best workaround for you if you can’t use the revised option 1. (And you will still be able to use the majority of the generated code).
Regarding the maintainability, I don’t think it’s going to be a major issue. We won’t change the interface frequently or anytime soon, as a lot of people (including yourself ;)) rely on it.

/ Kasper

Who will be using the API is uncertain at this point, it might become part of a solution at some point. So right now it will just be in-house people trying to incorporate Concordium communication into other systems.
So for now, it’s just myself and a few others.

And we can just write “BlockHash_” instead of “block_hash” and whatever other strange names Visual Studio thinks are more appropriate than the actual ones.

It would just have been so much nicer had we been able to send the request objects as we know they are supposed to look, by exploring the .proto file or just looking at the sample requests that BloomRPC builds automatically.

Now I have to create instances of every object in code to see how VS has decided to rename them, so we can replicate that in our calls to our own API.
And I can’t even create an instance and then just serialize it to get my sample in JSON, because it is changed into a third name when serialized (i.e. the original “block_hash” became “BlockHash_” and, when serialized, becomes “blockHash”).

So redundant work ftw - thank you Microsoft! :smiley:

Okay, I see.

Yes, that would be nice, but I believe the renaming is consistent with what we’ve figured out here. So the _ suffix should only be added to a property when the class has the same name as the field.

I don’t know if there is another option out there. You can’t be the first one annoyed by this, so perhaps there is a workaround that we haven’t found yet.

Best of luck,
Kasper

Hi,

I have been looking at this but I’m still not sure how a method call should look.

If I wanted to replicate the following call to the client, but to the gRPC API, how would that look?

Call to client: Instantiates a piggy bank smart contract.

concordium-client_1.1.1-0 contract init 68ca306a53ea6b206a169788b1bb0e98fcc486ad36d949b6dad287df1f2dc79f --sender SysConDev-01 --contract PiggyBank --energy 1000

How would you make that same call to the gRPC API?

You would use the SendTransaction call, where the payload is a serialized account transaction with the InitContract type. The serialization is described in grpc-for-smart-contracts.md in the Concordium/concordium-node repository on GitHub.

And the InitContract-specific part is described under the InitContract section.

Let me know if you need clarification on any parts of the structure/serialization of the transaction.

  • Søren

What I meant was - I have already been looking at the page you linked to, but it is still not very clear how this call should look.
Look at the object below - this is what I have got out of it so far. Whether it’s correct, I don’t know, but there are a few places where I have added “???” where the page describes some kind of content, but not the name of the attribute that should hold it.

{
   "Version":0,
   "Tag":0,
   "AccountTransaction":{
      "TransactionSignature":{
         "Length":0,
         "For_Each_Key_Pair_Outer???":[
            {
               "CredentialIndex":0,
               "Length":0,
               "For_Each_Key_Pair_Inner???":[
                  {
                     "KeyIndex":0,
                     "Length":0,
                     "Signature":""
                  }
               ]
            }
         ]
      },
      "TransactionHeader":{
         "AccountAddress":"",
         "Nonce":0,
         "Energy":1000,
         "PayloadSize":0,
         "TransactionExpiryTime":0
      },
      "Payload":{
         "Tag":1,
         "Content":{
            "Amount":0,
            "ModuleRef":"",
            "InitName":{
               "Length":14,
               "Name":"init_PiggyBank"
            },
            "Parameter":{
               "Length":0,
               "The_Bytes???":""
            }
         }
      }
   }
}

Also, sometimes the content of an attribute is described as “a byte array in UTF-8 encoding” (does this mean base64?) and sometimes it’s just described as “the byte array”.
Are these two different? - I would expect everything in regard to byte arrays to be base64-encoded strings.

It would be super helpful if you provided an example of the serialized message types on the page, since the description is somewhat vague/unclear sometimes.
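On the UTF-8 question specifically, my understanding (hedged, since the serialization document should be authoritative): "a byte array in UTF-8 encoding" means the raw UTF-8 bytes of the string go directly into the binary serialization; base64 only comes into play when bytes have to be represented inside a text format such as JSON. A quick illustration of the distinction:

```csharp
using System;
using System.Text;

// Raw UTF-8 bytes - this is what goes into a binary serialization.
byte[] utf8 = Encoding.UTF8.GetBytes("init_PiggyBank");
Console.WriteLine(utf8.Length); // 14, matching the Length field in the example

// Base64 is merely a text representation of those same bytes, used when
// embedding binary data in JSON or other text formats.
string b64 = Convert.ToBase64String(utf8);
Console.WriteLine(b64);
```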

Hi.

I get the impression that you’re trying to format the payload for the SendTransaction call as JSON, which might be the root of the misunderstanding. If this is completely wrong, then sorry for going down the wrong path here!

The data is supposed to be serialized as a byte array with the data appended in the order specified in the document. It might be helpful to have a look at the implementation of the serialization in one of our SDKs, e.g. the Java or Node JS SDK mentioned earlier.

Hope this points you in the right direction.

Best regards,
Søren.
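The "append bytes in order" approach described above can be sketched as follows. This is a hedged illustration: the field layout mirrors the InitContract structure from the JSON skeleton earlier in the thread (tag, amount, module reference, length-prefixed init name, length-prefixed parameter) as an assumption, and the field sizes must be verified against the serialization document:

```csharp
using System;
using System.Buffers.Binary;
using System.IO;
using System.Text;

// Hypothetical sketch of an InitContract payload: bytes appended in order,
// integers big-endian, strings as length-prefixed raw UTF-8 bytes.
// Verify tag value, field order, and widths against the linked document.
static byte[] InitContractPayloadSketch(ulong amount, byte[] moduleRef,
                                        string initName, byte[] parameter)
{
    using var ms = new MemoryStream();
    Span<byte> u64 = stackalloc byte[8];
    Span<byte> u16 = stackalloc byte[2];

    ms.WriteByte(1); // assumed payload tag for InitContract
    BinaryPrimitives.WriteUInt64BigEndian(u64, amount); ms.Write(u64);
    ms.Write(moduleRef); // 32-byte module reference

    byte[] nameBytes = Encoding.UTF8.GetBytes(initName); // e.g. "init_PiggyBank"
    BinaryPrimitives.WriteUInt16BigEndian(u16, (ushort)nameBytes.Length); ms.Write(u16);
    ms.Write(nameBytes); // "the bytes" follow their length prefix directly

    BinaryPrimitives.WriteUInt16BigEndian(u16, (ushort)parameter.Length); ms.Write(u16);
    ms.Write(parameter);
    return ms.ToArray();
}

var payload = InitContractPayloadSketch(0, new byte[32], "init_PiggyBank", Array.Empty<byte>());
Console.WriteLine(payload.Length); // 1 + 8 + 32 + 2 + 14 + 2 = 59
```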

I wasn’t trying to send a JSON payload - I was just trying to describe the structure of the payload as for what I got from the documentation.
Looking at the Java SDK, it would seem that I’m unable to understand what the documentation is trying to teach me.

In my country, most systems in the ERP world are based on Microsoft, making many of the actors in that field Microsoft partners, which in turn means that the go-to language in that segment is often Microsoft-based as well.
Are there any plans to make a C# version of the SDK? - I’m sure that would be helpful to a lot of people, myself included.

Can anyone in here provide an example of an InitContract AccountTransaction serialized into JSON (i.e. something that you would be able to deserialize into an AccountTransaction object in whichever language)?

The serialization description appears vague at some points, which leaves the reader (me, at least) unsure of how to represent certain fields/attributes.
E.g. InitContract.Parameter (it has a Length attribute and then it has “the bytes”, but where do “the bytes” go?), and the TransactionSignature also has some vague descriptions as to how a serialization would look.

So again, if it is unclear what I am asking for here: it is NOT about sending JSON to the gRPC API, it is about figuring out how the request should look, since I can’t make it out from studying the description or looking at the code of the Java SDK.

So if anyone has code that sends the above mentioned transaction type, please try to make a JSON serialization of it, I would very much like to have a look at it.