Internal message

While traversing the transactions using:

    let mut concordium_client = Client::new(endpoint).await?;
    let mut receiver = concordium_client.get_finalized_blocks().await?;
    while let Some(v) = receiver.next().await {
        let block_hash = v?.block_hash;
        println!("Blockhash: {:?}", block_hash);
        let transactions = concordium_client
            .get_block_transaction_events(block_hash)
            .await?
            .response;
        for result in process(transactions).await.iter() {
            println!("address: {}, amount: {}", result.address, result.amount);
            for device in database_connection
                .prepared
                .get_devices_from_account(result.address)
                .await?
                .iter()
            {
                gcloud
                    .send_push_notification(device, result.to_owned())
                    .await?;
            }
        }
    }

or the equivalent code in concordium-misc-tools/notification-server/src/bin/service.rs (Concordium/concordium-misc-tools on GitHub, main branch),

I get the following error after around 30 blocks have been traversed:

    Error: status: Internal, message: "h2 protocol error: error reading a body from connection: stream error received: unexpected internal error encountered", details: , metadata: MetadataMap { headers: {} }

I am using the testnet gRPC endpoint as the information collector.

When querying the reporter nodes instead, it seems to work just fine for much longer, but eventually fails with:

    Error: RPC error: Call failed: status: Unknown, message: "transport error", details: , metadata: MetadataMap { headers: {} }

    Caused by:
        0: Call failed: status: Unknown, message: "transport error", details: , metadata: MetadataMap { headers: {} }
        1: status: Unknown, message: "transport error", details: , metadata: MetadataMap { headers: {} }
        2: transport error
        3: operation was canceled: connection closed
        4: connection closed

A related issue seems to occur on the reporter nodes as well; however, the number of blocks traversed before it appears is larger by around a factor of 50.

We should gracefully handle the connection termination which may occur at the load balancer.

For the AWS load balancer, this is the field which ensures connections are eventually terminated:

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb#client_keep_alive

Verbal discussions concluded that adding such a retry may involve either configuration modifications or handling the retry on the application side.

While this may be a good idea, it is a bigger change. The conclusion is that, for now, such an issue has to be handled by the caller of the SDK; a sketch of what that could look like follows below.
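
For illustration, here is a minimal sketch of caller-side handling, reusing only the calls shown in the snippet above (`Client::new`, `get_finalized_blocks`, `get_block_transaction_events`, and the same item types): on any stream or query error the client is dropped and the connection is re-established after a short backoff. The function name `run_with_reconnect`, the fixed 5-second backoff, and the error-handling granularity are assumptions, not the SDK's or the notification server's actual approach; tracking the last processed block height and resuming from it after a reconnect (if the SDK version in use offers a query for that) is omitted here but would be needed to avoid gaps.

    use std::time::Duration;

    // Imports for Client/Endpoint and the stream's `next` are assumed to be the
    // same as in the snippet above (concordium-rust-sdk v2 plus tokio/futures).
    async fn run_with_reconnect(endpoint: Endpoint) {
        loop {
            // (Re-)establish the gRPC connection, backing off on failure.
            let mut client = match Client::new(endpoint.clone()).await {
                Ok(client) => client,
                Err(err) => {
                    eprintln!("connecting failed: {err}; retrying in 5s");
                    tokio::time::sleep(Duration::from_secs(5)).await;
                    continue;
                }
            };
            // Subscribe to finalized blocks; on failure, back off and reconnect.
            let mut receiver = match client.get_finalized_blocks().await {
                Ok(receiver) => receiver,
                Err(err) => {
                    eprintln!("subscribing to finalized blocks failed: {err}; retrying in 5s");
                    tokio::time::sleep(Duration::from_secs(5)).await;
                    continue;
                }
            };
            // Consume the stream until it ends or an item/query fails, then fall
            // through to the outer loop and reconnect.
            while let Some(item) = receiver.next().await {
                let block_hash = match item {
                    Ok(block) => block.block_hash,
                    Err(err) => {
                        eprintln!("finalized block stream error: {err}; reconnecting");
                        break;
                    }
                };
                match client.get_block_transaction_events(block_hash).await {
                    Ok(response) => {
                        let _transactions = response.response;
                        // ... process transactions and send push notifications as before ...
                    }
                    Err(err) => {
                        eprintln!("query failed for block {block_hash:?}: {err}; reconnecting");
                        break;
                    }
                }
            }
            eprintln!("finalized block stream ended; reconnecting in 5s");
            tokio::time::sleep(Duration::from_secs(5)).await;
        }
    }

In a real service an exponential backoff and a cap on consecutive failures would probably be preferable, but the sketch keeps the structure minimal.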