Error loading our PeerID from IPFS

We are running the Ceramic node and IPFS via this guide:

After switching to AWS S3, js-ceramic prints this log:

[2022-09-28T08:53:14.253Z] IMPORTANT: 'Starting Ceramic Daemon at version 2.6.0 with config: \n' +
  '{\n' +
  '  "anchor": {},\n' +
  '  "http-api": {\n' +
  '    "cors-allowed-origins": [\n' +
  '      ".*"\n' +
  '    ]\n' +
  '  },\n' +
  '  "ipfs": {\n' +
  '    "mode": "remote",\n' +
  '    "host": "http://0.0.0.0:5001"\n' +
  '  },\n' +
  '  "logger": {\n' +
  '    "log-level": 0,\n' +
  '    "log-to-files": false\n' +
  '  },\n' +
  '  "metrics": {\n' +
  '    "metrics-exporter-enabled": false,\n' +
  '    "metrics-port": 9090\n' +
  '  },\n' +
  '  "network": {\n' +
  '    "name": "testnet-clay"\n' +
  '  },\n' +
  '  "node": {},\n' +
  '  "state-store": {\n' +
  '    "mode": "s3",\n' +
  '    "s3-bucket": "s3-ceramic"\n' +
  '  }\n' +
  '}'
[2022-09-28T08:53:14.283Z] IMPORTANT: "Connecting to ceramic network 'testnet-clay' using pubsub topic '/ceramic/testnet-clay'"
[2022-09-28T08:53:14.283Z] INFO: 'Performing periodic reconnection to bootstrap peers'
[2022-09-28T08:53:14.293Z] WARNING: 'Error loading our PeerID from IPFS: FetchError: request to http://0.0.0.0:5001/api/v0/id failed, reason: connect ECONNREFUSED 0.0.0.0:5001. Skipping connection to bootstrap peers'

/js-ceramic/node_modules/rxjs/dist/cjs/internal/util/reportUnhandledError.js:13
            throw err;
            ^
FetchError: request to http://0.0.0.0:5001/api/v0/id failed, reason: connect ECONNREFUSED 0.0.0.0:5001
    at ClientRequest.<anonymous> (/js-ceramic/node_modules/node-fetch/lib/index.js:1461:11)
    at ClientRequest.emit (node:events:513:28)
    at ClientRequest.emit (node:domain:489:12)
    at Socket.socketErrorListener (node:_http_client:481:9)
    at Socket.emit (node:events:513:28)
    at Socket.emit (node:domain:489:12)
    at emitErrorNT (node:internal/streams/destroy:157:8)
    at emitErrorCloseNT (node:internal/streams/destroy:122:3)
    at processTicksAndRejections (node:internal/process/task_queues:83:21) {
  type: 'system',
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED'
}
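The failing request can be reproduced from inside the Ceramic container (assuming the container is named js-ceramic); node is used here because curl may not be installed in the image:

# same request the daemon makes on startup; from inside the container this should show the same ECONNREFUSED
docker exec js-ceramic node -e "require('http').request({host:'0.0.0.0',port:5001,path:'/api/v0/id',method:'POST'},r=>r.pipe(process.stdout)).on('error',console.error).end()"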

Running docker logs ipfs returns this:

ipfs version 0.12.0
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.12.0-06191df-dirty
Repo version: 12
System version: amd64/linux
Golang version: go1.16.7
2022/09/28 08:40:37 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/127.0.0.1/tcp/8081/ws
Swarm listening on /ip4/172.17.0.2/tcp/4001
Swarm listening on /ip4/172.17.0.2/tcp/8081/ws
Swarm listening on /p2p-circuit
Swarm announcing /ip4/127.0.0.1/tcp/4001
Swarm announcing /ip4/127.0.0.1/tcp/8081/ws
Swarm announcing /ip4/172.17.0.2/tcp/4001
Swarm announcing /ip4/172.17.0.2/tcp/8081/ws
Healthcheck server listening on port 8011
API server listening on /ip4/0.0.0.0/tcp/5001
WebUI: http://0.0.0.0:5001/webui
Gateway (readonly) server listening on /ip4/0.0.0.0/tcp/8080
Daemon is ready
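For reference, the 172.17.0.2 addresses above are the IPFS container's IP on Docker's default bridge network; it can be checked with something like the following (assuming the container is named ipfs):

# prints the container's IP on each attached network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' ipfs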

We checked the API - it returns 200:

curl  -X POST http://0.0.0.0:5001/api/v0/bitswap/reprovide -v
*   Trying 0.0.0.0:5001...
* TCP_NODELAY set
* Connected to 0.0.0.0 (127.0.0.1) port 5001 (#0)
> POST /api/v0/bitswap/reprovide HTTP/1.1
> Host: 0.0.0.0:5001
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Access-Control-Allow-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length
< Access-Control-Expose-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length
< Content-Type: application/json
< Server: go-ipfs/0.12.0
< Trailer: X-Stream-Error
< Vary: Origin
< X-Chunked-Output: 1
< Date: Wed, 28 Sep 2022 09:07:38 GMT
< Transfer-Encoding: chunked
<
* Connection #0 to host 0.0.0.0 left intact

Ok, I see. Can you run this command from the host machine?

curl -X POST http://0.0.0.0:5001/api/v0/id
{"ID":"QmXdQsAT6JLegPwWYUvkzFhQiGEA4PEBQXpYB26ahwyjJk","PublicKey":"CAASpAIwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQDlCcDNgA0tFwLURG6UekUt6mluexJvAx4G7onyIV6GcqS+Zne995KwmE8ZX5XUHpMQ4y4sndCvfs2r5i3ULuTYZkkrgwqwk+5m5PJ5rOWAhBhqLkZ7tdb5Ie/ZM6Jdm4m624dUsTR6gk3Hzk2xl5j3rXaab22VfusA8Zj0a0Tex9omfSDwNj7+ZUiIeeF18ocHxCbQmtGi9Z4YnLLn6uG6q7NnE9Ixi/OqW+N03jmdOv6iKZywhMcQ53+icogNfC8o/Aj4giHx2mwspnyp4Fxy0cpI1hWQtrAqvdFkSsnd0TE3xws/vnDWfvIhyaLvF3774jr4rWkC/5Lrz7E5y/GrAgED","Addresses":["/ip4/127.0.0.1/tcp/4001/p2p/QmXdQsAT6JLegPwWYUvkzFhQiGEA4PEBQXpYB26ahwyjJk","/ip4/127.0.0.1/tcp/8081/ws/p2p/QmXdQsAT6JLegPwWYUvkzFhQiGEA4PEBQXpYB26ahwyjJk","/ip4/172.17.0.2/tcp/4001/p2p/QmXdQsAT6JLegPwWYUvkzFhQiGEA4PEBQXpYB26ahwyjJk","/ip4/172.17.0.2/tcp/8081/ws/p2p/QmXdQsAT6JLegPwWYUvkzFhQiGEA4PEBQXpYB26ahwyjJk","/ip4/OUR_IP/tcp/50050/p2p/QmXdQsAT6JLegPwWYUvkzFhQiGEA4PEBQXpYB26ahwyjJk"],"AgentVersion":"go-ipfs/0.12.0/06191df-dirty/docker","ProtocolVersion":"ipfs/0.1.0","Protocols":["/floodsub/1.0.0","/ipfs/bitswap","/ipfs/bitswap/1.0.0","/ipfs/bitswap/1.1.0","/ipfs/bitswap/1.2.0","/ipfs/id/1.0.0","/ipfs/id/push/1.0.0","/ipfs/lan/kad/1.0.0","/ipfs/ping/1.0.0","/libp2p/autonat/1.0.0","/libp2p/circuit/relay/0.1.0","/libp2p/circuit/relay/0.2.0/stop","/meshsub/1.0.0","/meshsub/1.1.0","/p2p/id/delta/1.0.0","/x/"]}

Ok, can you try running the following command?

docker network create ceramic
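This creates a user-defined bridge network; containers attached to it can reach each other by container name. Once both containers are re-run with --network ceramic (see the commands below), membership can be verified with something like:

# lists the names of containers attached to the network
docker network inspect ceramic -f '{{range .Containers}}{{.Name}} {{end}}'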

After that, can you stop your Ceramic and IPFS containers (docker rm -f ...) and rerun them, both with the --network ceramic flag? For example:

docker run -d \
  --network ceramic -p 7007:7007 \
  -v /path_for_daemon_config:/root/.ceramic/daemon.config.json \
  -v /path_for_ceramic_logs:/root/.ceramic/logs \
  -e NODE_ENV=production \
  -e AWS_ACCESS_KEY_ID=s3_access_key_id \
  -e AWS_SECRET_ACCESS_KEY=s3_secret_access_key \
  --name js-ceramic \
  ceramicnetwork/js-ceramic:latest
# 5001 is the IPFS API port, 8011 the healthcheck port,
# and IPFS_S3_KEY_TRANSFORM sets the sharding method.
docker run \
  --network ceramic \
  -p 5001:5001 \
  -p 8011:8011 \
  -v /path_on_volume_for_ipfs_repo:/data/ipfs \
  -e IPFS_ENABLE_S3=true \
  -e IPFS_S3_REGION=region \
  -e IPFS_S3_BUCKET_NAME=bucket_name \
  -e IPFS_S3_ROOT_DIRECTORY=root_directory \
  -e IPFS_S3_ACCESS_KEY_ID=aws_access_key_id \
  -e IPFS_S3_SECRET_ACCESS_KEY=aws_secret_access_key \
  -e IPFS_S3_KEY_TRANSFORM=next-to-last/2 \
  --name ipfs \
  go-ipfs-daemon
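With both containers on the ceramic network, the Ceramic daemon should reach IPFS by container name rather than 0.0.0.0. Assuming the IPFS container keeps the name ipfs from the command above, the ipfs section of daemon.config.json would look something like:

  "ipfs": {
    "mode": "remote",
    "host": "http://ipfs:5001"
  }

The hostname ipfs here is just the container name resolved by Docker's network DNS; adjust it if you use a different --name.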