
Tuesday, May 26, 2020

Building and Running the Indy SDK Java Samples

Following the documents below, this walkthrough builds and runs the Indy SDK Java samples on Ubuntu.

1. Environment

2. Installing the Java Build Tools

  1. Install the Java SDK.

    $ sudo apt-get install openjdk-8-jdk
    
  2. Install Maven.

    $ sudo apt-get install maven
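
    You can optionally confirm that both tools are available before building; the reported versions will vary with your setup:

    $ java -version
    $ mvn -version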
    

3. Building the Java Samples

  1. Download the Indy SDK source.

    $ cd $HOME
    $ git clone https://github.com/hyperledger/indy-sdk.git
    
  2. Build the Java samples.

    $ cd $HOME/indy-sdk/samples/java
    $ mvn package
    

4. Testing

  1. Start an Indy node pool.

    $ cd $HOME/indy-sdk
    $ docker build -f ci/indy-pool.dockerfile -t indy_pool .
    $ docker run -itd -p 9701-9708:9701-9708 indy_pool
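
    Before running the samples, you can optionally check that the pool's client ports accept connections; a minimal bash sketch (the port range matches the -p mapping above):

    $ for port in $(seq 9701 9708); do
        timeout 1 bash -c "</dev/tcp/127.0.0.1/$port" 2>/dev/null \
          && echo "port $port: open" || echo "port $port: closed"
      done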
    
  2. Run the Java samples.

    $ mvn exec:java -Dexec.mainClass=Main
    

    A successful run prints messages like the following.

    Anoncreds sample -> started
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    Anoncreds sample -> completed
    Anoncreds Revocation sample -> started
    Anoncreds Revocation sample -> completed
    Ledger sample -> started
    Ledger sample -> completed
    Crypto sample -> started
    Crypto sample -> completed
    Endorser sample -> started
    Endorser sample -> completed
    

    If the Indy node pool is not running, a TimeoutException error message like the following is printed.

    Anoncreds sample -> started
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    [WARNING] 
    java.util.concurrent.ExecutionException: org.hyperledger.indy.sdk.ledger.TimeoutException: Timeout happens for ledger operation.
        at java.util.concurrent.CompletableFuture.reportGet (CompletableFuture.java:395)
        at java.util.concurrent.CompletableFuture.get (CompletableFuture.java:1999)
        at Anoncreds.demo (Anoncreds.java:27)
        at Main.main (Main.java:4)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke (Method.java:566)
        at org.codehaus.mojo.exec.ExecJavaMojo$1.run (ExecJavaMojo.java:282)
        at java.lang.Thread.run (Thread.java:834)
    Caused by: org.hyperledger.indy.sdk.ledger.TimeoutException: Timeout happens for ledger operation.
        at org.hyperledger.indy.sdk.IndyException.fromSdkError (IndyException.java:164)
        at org.hyperledger.indy.sdk.IndyJava$API.checkResult (IndyJava.java:92)
        at org.hyperledger.indy.sdk.pool.Pool.access$100 (Pool.java:20)
        at org.hyperledger.indy.sdk.pool.Pool$1.callback (Pool.java:52)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke (Method.java:566)
        at com.sun.jna.CallbackReference$DefaultCallbackProxy.invokeCallback (CallbackReference.java:520)
        at com.sun.jna.CallbackReference$DefaultCallbackProxy.callback (CallbackReference.java:551)
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    
  3. You can run the Java samples against an Indy node pool at a specific address by setting the TEST_POOL_IP environment variable.

    $ export TEST_POOL_IP=192.168.56.110
    $ mvn exec:java -Dexec.mainClass=Main
    

    If you previously failed while connecting to an Indy node pool at a different address, changing the address and running as above may produce a PoolLedgerConfigExistsException error message like the following.

    Anoncreds sample -> started
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    [WARNING] 
    java.util.concurrent.ExecutionException: org.hyperledger.indy.sdk.pool.PoolLedgerConfigExistsException: A pool ledger configuration already exists with the specified name.
        at java.util.concurrent.CompletableFuture.reportGet (CompletableFuture.java:395)
        at java.util.concurrent.CompletableFuture.get (CompletableFuture.java:1999)
        at utils.PoolUtils.createPoolLedgerConfig (PoolUtils.java:48)
        at Anoncreds.demo (Anoncreds.java:26)
        at Main.main (Main.java:4)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke (Method.java:566)
        at org.codehaus.mojo.exec.ExecJavaMojo$1.run (ExecJavaMojo.java:282)
        at java.lang.Thread.run (Thread.java:834)
    Caused by: org.hyperledger.indy.sdk.pool.PoolLedgerConfigExistsException: A pool ledger configuration already exists with the specified name.
        at org.hyperledger.indy.sdk.IndyException.fromSdkError (IndyException.java:162)
        at org.hyperledger.indy.sdk.IndyJava$API.checkResult (IndyJava.java:92)
        at org.hyperledger.indy.sdk.pool.Pool.access$400 (Pool.java:20)
        at org.hyperledger.indy.sdk.pool.Pool$2.callback (Pool.java:70)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke (Method.java:566)
        at com.sun.jna.CallbackReference$DefaultCallbackProxy.invokeCallback (CallbackReference.java:520)
        at com.sun.jna.CallbackReference$DefaultCallbackProxy.callback (CallbackReference.java:551)
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    

    In that case, delete the $HOME/.indy_client directory and run the samples again.
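
    A minimal sketch combining the cleanup and the re-run (the IP address is the example value from step 3; adjust it to your pool):

    $ rm -rf $HOME/.indy_client
    $ cd $HOME/indy-sdk/samples/java
    $ TEST_POOL_IP=192.168.56.110 mvn exec:java -Dexec.mainClass=Main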


Friday, May 22, 2020

Building and Installing the Indy SDK

To install the Indy SDK binaries, you can install libindy, libnullpay, libvcx, and indy-cli by following the document below.

Here, we build and install the Indy SDK from source on Ubuntu, following the documents below.

1. Environment

2. Installing the C Build Tools

  1. Install Make.

    $ sudo apt-get install make
    
  2. Install the C/C++ compiler and linker.

    $ sudo apt-get install g++
    

3. Installing the Rust Toolchain

  1. Run the following command in a terminal.

    $ curl https://sh.rustup.rs -sSf | sh
    

    The command above performs the following steps.

    1. Download the installation script
    2. Run the installation script
      1. Install rustup
      2. Use rustup to install the latest version of Rust
      3. Add the Rust binary path $HOME/.cargo/bin to the PATH environment variable

    On a successful installation, the following message is printed.

    Rust is installed now. Great!
    
  2. After logging in again, the Rust binary path $HOME/.cargo/bin will be included in the PATH environment variable.
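
    To use Rust in the current shell without logging out, you can instead source the environment file that rustup writes during installation:

    $ source $HOME/.cargo/env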

  3. Check the version of the installed Rust compiler.

    $ rustc --version
    rustc 1.43.1 (8d69840ab 2020-05-04)
    

4. Building and Installing the Indy SDK

  1. Install the required libraries and tools.

    $ sudo apt-get update && \
      sudo apt-get install -y \
       build-essential \
       pkg-config \
       cmake \
       libssl-dev \
       libsqlite3-dev \
       libzmq3-dev \
       libncursesw5-dev
    
  2. libsodium, which libindy depends on, is not available from the apt repositories, so download and build it from source.

    $ cd /tmp && \
      curl https://download.libsodium.org/libsodium/releases/old/unsupported/libsodium-1.0.14.tar.gz | tar -xz && \
      cd /tmp/libsodium-1.0.14 && \
      ./configure --disable-shared && \
      make && \
      sudo make install && \
      rm -rf /tmp/libsodium-1.0.14
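
    You can check that the static library and header were installed; the paths assume the default /usr/local prefix used above:

    $ ls /usr/local/lib/libsodium.a /usr/local/include/sodium.h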
    
  3. Download and build the libindy source.

    $ cd $HOME
    $ git clone https://github.com/hyperledger/indy-sdk.git
    $ cd ./indy-sdk/libindy
    $ cargo build
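
    If the build succeeds, the shared library appears under target/debug; a quick check:

    $ ls -l $HOME/indy-sdk/libindy/target/debug/libindy.so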
    

5. Testing

5.1 Starting an Indy Node Pool

Run an Indy node pool locally, following the document How to start local nodes pool with docker.

  1. Build the Indy pool image with Docker.

    $ cd $HOME/indy-sdk
    $ docker build -f ci/indy-pool.dockerfile -t indy_pool .
    Sending build context to Docker daemon  4.832GB
    Step 1/22 : FROM ubuntu:16.04
     ---> 005d2078bdfa
    Step 2/22 : ARG uid=1000
     ---> Using cache
     ---> 8e4ce2fe34c7
    Step 3/22 : RUN apt-get update -y && apt-get install -y 	git 	wget 	python3.5 	python3-pip 	python-setuptools 	python3-nacl 	apt-transport-https 	ca-certificates 	supervisor
     ---> Using cache
     ---> c4db29f53a16
    Step 4/22 : RUN pip3 install -U 	pip==9.0.3 	setuptools
     ---> Using cache
     ---> 29699d8b1bcf
    Step 5/22 : RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys CE7709D068DB5E88 || apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys CE7709D068DB5E88
     ---> Using cache
     ---> 5bc97b4e8591
    Step 6/22 : ARG indy_stream=master
     ---> Using cache
     ---> 847ea83573d3
    Step 7/22 : RUN echo "deb https://repo.sovrin.org/deb xenial $indy_stream" >> /etc/apt/sources.list
     ---> Using cache
     ---> c852d95dfaec
    Step 8/22 : RUN useradd -ms /bin/bash -u $uid indy
     ---> Using cache
     ---> 066abdcbb31e
    Step 9/22 : ARG indy_plenum_ver=1.12.1~dev989
     ---> Using cache
     ---> 321047b89616
    Step 10/22 : ARG indy_node_ver=1.12.1~dev1172
     ---> Using cache
     ---> 65a940e326d2
    Step 11/22 : ARG python3_indy_crypto_ver=0.4.5
     ---> Using cache
     ---> 2c012ba0ff4c
    Step 12/22 : ARG indy_crypto_ver=0.4.5
     ---> Using cache
     ---> c5e27ff6da07
    Step 13/22 : ARG python3_pyzmq_ver=18.1.0
     ---> Using cache
     ---> daf212ffee6f
    Step 14/22 : RUN apt-get update -y && apt-get install -y         python3-pyzmq=${python3_pyzmq_ver}         indy-plenum=${indy_plenum_ver}         indy-node=${indy_node_ver}         python3-indy-crypto=${python3_indy_crypto_ver}         libindy-crypto=${indy_crypto_ver}         vim
     ---> Using cache
     ---> dd09e2535e0e
    Step 15/22 : RUN echo "[supervisord]\nlogfile = /tmp/supervisord.log\nlogfile_maxbytes = 50MB\nlogfile_backups=10\nlogLevel = error\npidfile = /tmp/supervisord.pid\nnodaemon = true\nminfds = 1024\nminprocs = 200\numask = 022\nuser = indy\nidentifier = supervisor\ndirectory = /tmp\nnocleanup = true\nchildlogdir = /tmp\nstrip_ansi = false\n\n[program:node1]\ncommand=start_indy_node Node1 0.0.0.0 9701 0.0.0.0 9702\ndirectory=/home/indy\nstdout_logfile=/tmp/node1.log\nstderr_logfile=/tmp/node1.log\n\n[program:node2]\ncommand=start_indy_node Node2 0.0.0.0 9703 0.0.0.0 9704\ndirectory=/home/indy\nstdout_logfile=/tmp/node2.log\nstderr_logfile=/tmp/node2.log\n\n[program:node3]\ncommand=start_indy_node Node3 0.0.0.0 9705 0.0.0.0 9706\ndirectory=/home/indy\nstdout_logfile=/tmp/node3.log\nstderr_logfile=/tmp/node3.log\n\n[program:node4]\ncommand=start_indy_node Node4 0.0.0.0 9707 0.0.0.0 9708\ndirectory=/home/indy\nstdout_logfile=/tmp/node4.log\nstderr_logfile=/tmp/node4.log\n">> /etc/supervisord.conf
     ---> Using cache
     ---> a293cada0e77
    Step 16/22 : USER indy
     ---> Using cache
     ---> a80b0dfc3571
    Step 17/22 : RUN awk '{if (index($1, "NETWORK_NAME") != 0) {print("NETWORK_NAME = \"sandbox\"")} else print($0)}' /etc/indy/indy_config.py> /tmp/indy_config.py
     ---> Using cache
     ---> dcb46ff2da65
    Step 18/22 : RUN mv /tmp/indy_config.py /etc/indy/indy_config.py
     ---> Using cache
     ---> e540d82dfe32
    Step 19/22 : ARG pool_ip=127.0.0.1
     ---> Using cache
     ---> fd5c44727a33
    Step 20/22 : RUN generate_indy_pool_transactions --nodes 4 --clients 5 --nodeNum 1 2 3 4 --ips="$pool_ip,$pool_ip,$pool_ip,$pool_ip"
     ---> Running in 3290618d5b14
    Generating keys for provided seed b'000000000000000000000000000Node1'
    Init local keys for client-stack
    Public key is HXrfcFWDjWusENBoXhV8mARzq51f1npWYWaA1GzfeMDG
    Verification key is Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv
    Init local keys for node-stack
    Public key is HXrfcFWDjWusENBoXhV8mARzq51f1npWYWaA1GzfeMDG
    Verification key is Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv
    BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
    Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
    This node with name Node1 will use ports 9701 and 9702 for nodestack and clientstack respectively
    Generating keys for provided seed b'000000000000000000000000000Node2'
    Init local keys for client-stack
    Public key is Fsp2dyt7D2B4GA53hKnEmLym5Y75ExGFz2ZBzcQMNKsB
    Verification key is 8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb
    Init local keys for node-stack
    Public key is Fsp2dyt7D2B4GA53hKnEmLym5Y75ExGFz2ZBzcQMNKsB
    Verification key is 8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb
    BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
    Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
    This node with name Node2 will use ports 9703 and 9704 for nodestack and clientstack respectively
    Generating keys for provided seed b'000000000000000000000000000Node3'
    Init local keys for client-stack
    Public key is 6KTs7Q9Lng5uX6oWCkVifiJ6hSpkdHiRijAsXtAunnGN
    Verification key is DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya
    Init local keys for node-stack
    Public key is 6KTs7Q9Lng5uX6oWCkVifiJ6hSpkdHiRijAsXtAunnGN
    Verification key is DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya
    BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
    Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
    This node with name Node3 will use ports 9705 and 9706 for nodestack and clientstack respectively
    Generating keys for provided seed b'000000000000000000000000000Node4'
    Init local keys for client-stack
    Public key is ECUd5UfoYa2yUBkmxNkMbkfGKcZ8Voh5Mi3JzBwWEDpm
    Verification key is 4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA
    Init local keys for node-stack
    Public key is ECUd5UfoYa2yUBkmxNkMbkfGKcZ8Voh5Mi3JzBwWEDpm
    Verification key is 4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA
    BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
    Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
    This node with name Node4 will use ports 9707 and 9708 for nodestack and clientstack respectively
    BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
    Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
    BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
    Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
    BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
    Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
    BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
    Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
    Removing intermediate container 3290618d5b14
     ---> dc14f981b386
    Step 21/22 : EXPOSE 9701 9702 9703 9704 9705 9706 9707 9708
     ---> Running in 30903fff91f4
    Removing intermediate container 30903fff91f4
     ---> 86e1e83f19c2
    Step 22/22 : CMD ["/usr/bin/supervisord"]
     ---> Running in 7a8ade3e2407
    Removing intermediate container 7a8ade3e2407
     ---> ab8792faed23
    Successfully built ab8792faed23
    Successfully tagged indy_pool:latest
    
  2. Run the Indy pool.

    $ docker run -itd -p 9701-9708:9701-9708 indy_pool
    e5f15ad2c43e39c5c8cf4c798445da395da0b268a32f0f8fc6f642a96adb9d18
    

5.2 Running the Tests

  1. Start the tests with the following commands.

    $ cd $HOME/indy-sdk/libindy
    $ RUST_TEST_THREADS=1 cargo test
    

    You can point the tests at a specific Indy pool by setting its IP in the TEST_POOL_IP environment variable, as shown below.

    $ RUST_TEST_THREADS=1 TEST_POOL_IP=10.0.0.2 cargo test
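
    cargo test also accepts a name filter, which is convenient while working on one area. For example, to run only the tests whose names contain "pool":

    $ RUST_TEST_THREADS=1 TEST_POOL_IP=10.0.0.2 cargo test pool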
    

5.3 Building the Indy CLI

  1. Because indy-cli depends on libindy, build it with the location of libindy specified, as shown below.

    $ cd $HOME/indy-sdk/cli/
    $ RUSTFLAGS=" -L ../libindy/target/debug" cargo build
    
  2. Add the location of libindy to the LD_LIBRARY_PATH environment variable.

    echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/indy-sdk/libindy/target/debug" >> ~/.bashrc
    sudo ldconfig
    source ~/.bashrc
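
    You can check that the dynamic linker now resolves libindy for the CLI binary built in the previous step:

    $ ldd $HOME/indy-sdk/cli/target/debug/indy-cli | grep libindy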
    
  3. Run indy-cli with the following commands.

    $ cd $HOME/indy-sdk/cli/target/debug
    $ ./indy-cli
    


Sunday, May 17, 2020

Installing Docker on Ubuntu

Install Docker Engine and Docker Compose on Ubuntu, following the documents below.

Environment

Installing Docker Engine

The Install Docker Engine on Ubuntu document offers the following three installation methods.

  1. Install using the repository
  2. Install from a package
  3. Install using the convenience script

Here, we proceed with the first method.

Setting Up the Repository

  1. Install packages that allow apt to use a repository over HTTPS

    $ sudo apt-get update
    
    $ sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        software-properties-common
    
  2. Add Docker's official GPG key

    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    
  3. Set up the stable repository

    $ sudo add-apt-repository \
    	"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    	$(lsb_release -cs) \
    	stable"
    

    In the command above, $(lsb_release -cs) returns focal, which causes the command to fail because Docker does not yet support focal. To work around this, replace $(lsb_release -cs) with bionic, the release name of Ubuntu 18.04 LTS, and run the command again.

    $ sudo add-apt-repository \
    	"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    	bionic \
    	stable"
    

Installing Docker Engine

  1. Install the latest versions of Docker Engine and containerd

     $ sudo apt-get update
     $ sudo apt-get install docker-ce docker-ce-cli containerd.io
    

    Check the version of the installed Docker Engine with the following command.

    $ docker --version
    Docker version 19.03.8, build afacb8b7f0
    

Post-Installation Steps

Docker Engine is now installed. In addition, you can make Docker more convenient to use by performing a few tasks from the Post-installation steps for Linux document.

  • Manage Docker as a non-root user

    $ sudo usermod -aG docker $USER
    

    Caution: if you use only the -G option without -a in the command above, the user is removed from all of their existing groups and added only to the newly specified docker group.

    In a virtual machine environment, you must restart the virtual machine for the group membership to take effect.
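
    To pick up the new group membership in the current shell without restarting, one common approach is:

    $ newgrp docker
    $ docker run hello-world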

  • Configure Docker to start on boot

    $ sudo systemctl enable docker
    

Installing Docker Compose

  1. Install Docker Compose

    $ sudo apt-get install docker-compose
    

    Check the version of the installed Docker Compose with the following command.

    $ docker-compose --version
    docker-compose version 1.17.1, build unknown
    


Friday, May 15, 2020

Getting Started with Hyperledger Fabric

With a virtual machine created on a Windows PC and Ubuntu installed on it, I worked through the document below.

The version of each piece of software used while writing this post is shown next to its name.

Environment

1. Prerequisites

Git 2.17.1

sudo apt-get install git

cURL 7.58.0

sudo apt-get install curl

Docker 19.03.8

Install Docker by following the documents below.

I chose Install using the repository as the installation method.

  • SETUP THE REPOSITORY

    1. Install packages that allow apt to use a repository over HTTPS

      $ sudo apt-get update
      
      $ sudo apt-get install \
          apt-transport-https \
          ca-certificates \
          curl \
          gnupg-agent \
          software-properties-common
      
    2. Add Docker's official GPG key

      $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      
    3. Set up the stable repository

      $ sudo add-apt-repository \
      	"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
      	$(lsb_release -cs) \
      	stable"
      

      In the command above, $(lsb_release -cs) returns focal, which causes the command to fail because Docker does not yet support focal. To work around this, replace $(lsb_release -cs) with bionic, the release name of Ubuntu 18.04 LTS, and run the command again.

      $ sudo add-apt-repository \
      	"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
      	bionic \
      	stable"
      
  • INSTALL DOCKER ENGINE

    1. Install the latest versions of Docker Engine and containerd

       $ sudo apt-get update
       $ sudo apt-get install docker-ce docker-ce-cli containerd.io
      

Docker Engine is now installed. In addition, you can make Docker more convenient to use by performing a few tasks from the Post-installation steps for Linux document.

  • Manage Docker as a non-root user

    $ sudo usermod -aG docker $USER
    

    Caution: if you use only the -G option without -a in the command above, the user is removed from all of their existing groups and added only to the newly specified docker group.

    In a virtual machine environment, you must restart the virtual machine for the group membership to take effect.

  • Configure Docker to start on boot

    $ sudo systemctl enable docker
    

docker-compose 1.17.1

sudo apt-get install docker-compose

Go Programming Language 1.14.2

Install Go as follows, referring to the document above.

  1. Download the distribution file go1.14.2.linux-amd64.tar.gz

  2. Extract the downloaded file into the /usr/local directory

    sudo tar -C /usr/local -xzf go$VERSION.$OS-$ARCH.tar.gz
    
  3. Add Go's bin directory to the PATH environment variable

    export PATH=$PATH:/usr/local/go/bin
    

Node.js 12.16.3 & NPM 6.14.4

Install Node.js as follows, referring to the document above. NPM is installed along with Node.js.

  1. Download the distribution file node-v12.16.3-linux-x64.tar.xz

  2. Extract the downloaded file into the /usr/local/lib/nodejs directory

    sudo mkdir -p /usr/local/lib/nodejs
    sudo tar -xJvf node-v12.16.3-linux-x64.tar.xz -C /usr/local/lib/nodejs 
    
  3. Add Node.js's bin directory to the PATH environment variable

    export PATH=$PATH:/usr/local/lib/nodejs/node-v12.16.3-linux-x64/bin
    
  4. Add the GOPATH environment variable for the Go workspace used to store Fabric code

    export GOPATH=$HOME/go
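
    The export commands above apply only to the current session. To persist them across logins, you can append them to your shell profile; the paths assume the install locations used above:

    echo 'export PATH=$PATH:/usr/local/go/bin:/usr/local/lib/nodejs/node-v12.16.3-linux-x64/bin' >> ~/.profile
    echo 'export GOPATH=$HOME/go' >> ~/.profile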
    

Python 2.7

The Fabric Node.js SDK requires Python 2.7 in order for npm install to complete successfully.

sudo apt-get install python

2. Installing the Samples, Binaries, and Docker Images

1. Create a working directory

Create a directory to download and store the files in. You may name the directory differently from the example below.

mkdir Hyperledger
cd Hyperledger

2. Run the script

Download and run the script file with the following command.

curl -sSL https://bit.ly/2ysbOFE | bash -s

The command above performs the following tasks.

  1. If needed, clone the hyperledger/fabric-samples repository
  2. Checkout the appropriate version tag
  3. Install the Hyperledger Fabric platform-specific binaries and config files for the version specified into the /bin and /config directories of fabric-samples
  4. Download the Hyperledger Fabric docker images for the version specified

Set the following environment variable so that the downloaded commands can be found on the executable path.

export PATH=$PATH:$HOME/Hyperledger/fabric-samples/bin
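
As a quick check, the peer binary installed above should now be found on the path:

peer version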

3. Using the Fabric test network

1. Start the test network

cd $HOME/Hyperledger/fabric-samples/test-network
./network.sh up

When the command runs successfully, messages like the following are printed.

Starting nodes with CLI timeout of '5' tries and CLI delay of '3' seconds and using database 'leveldb' 

LOCAL_VERSION=2.1.0
DOCKER_IMAGE_VERSION=2.1.0
Creating network "net_test" with the default driver
Creating volume "net_peer0.org1.example.com" with default driver
Creating volume "net_peer0.org2.example.com" with default driver
Creating volume "net_orderer.example.com" with default driver
Creating peer0.org1.example.com ... 
Creating orderer.example.com ... 
Creating peer0.org2.example.com ... 
Creating peer0.org1.example.com
Creating orderer.example.com
Creating orderer.example.com ... done
CONTAINER ID        IMAGE                               COMMAND             CREATED             STATUS                  PORTS                              NAMES
41aa446a5e54        hyperledger/fabric-peer:latest      "peer node start"   6 seconds ago       Up 1 second             7051/tcp, 0.0.0.0:9051->9051/tcp   peer0.org2.example.com
3822795b91e4        hyperledger/fabric-orderer:latest   "orderer"           6 seconds ago       Up Less than a second   0.0.0.0:7050->7050/tcp             orderer.example.com
979efa6ddbc5        hyperledger/fabric-peer:latest      "peer node start"   6 seconds ago       Up 1 second             0.0.0.0:7051->7051/tcp             peer0.org1.example.com

2. Create a channel

./network.sh createChannel

The command above creates a channel named mychannel. The output messages are as follows.

Creating channel 'mychannel'.

If network is not up, starting nodes with CLI timeout of '5' tries and CLI delay of '3' seconds and using database 'leveldb 

### Generating channel configuration transaction 'mychannel.tx' ###
+ configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
2020-05-05 21:46:11.150 KST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-05 21:46:11.222 KST [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /home/trvoid/Hyperledger/fabric-samples/test-network/configtx/configtx.yaml
2020-05-05 21:46:11.222 KST [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 003 Generating new channel configtx
2020-05-05 21:46:11.225 KST [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 004 Writing new channel tx
+ res=0
+ set +x

### Generating channel configuration transaction 'mychannel.tx' ###
#######    Generating anchor peer update for Org1MSP  ##########
+ configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
2020-05-05 21:46:11.285 KST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-05 21:46:11.354 KST [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /home/trvoid/Hyperledger/fabric-samples/test-network/configtx/configtx.yaml
2020-05-05 21:46:11.355 KST [common.tools.configtxgen] doOutputAnchorPeersUpdate -> INFO 003 Generating anchor peer update
2020-05-05 21:46:11.356 KST [common.tools.configtxgen] doOutputAnchorPeersUpdate -> INFO 004 Writing anchor peer update
+ res=0
+ set +x

#######    Generating anchor peer update for Org2MSP  ##########
+ configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP
2020-05-05 21:46:11.419 KST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-05 21:46:11.494 KST [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /home/trvoid/Hyperledger/fabric-samples/test-network/configtx/configtx.yaml
2020-05-05 21:46:11.494 KST [common.tools.configtxgen] doOutputAnchorPeersUpdate -> INFO 003 Generating anchor peer update
2020-05-05 21:46:11.497 KST [common.tools.configtxgen] doOutputAnchorPeersUpdate -> INFO 004 Writing anchor peer update
+ res=0
+ set +x

Creating channel mychannel
Using organization 1
+ peer channel create -o localhost:7050 -c mychannel --ordererTLSHostnameOverride orderer.example.com -f ./channel-artifacts/mychannel.tx --outputBlock ./channel-artifacts/mychannel.block --tls true --cafile /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
+ res=0
+ set +x
2020-05-05 21:46:14.834 KST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-05-05 21:46:14.895 KST [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
2020-05-05 21:46:14.898 KST [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2020-05-05 21:46:15.101 KST [cli.common] readBlock -> INFO 004 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-05-05 21:46:15.110 KST [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
2020-05-05 21:46:15.311 KST [cli.common] readBlock -> INFO 006 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-05-05 21:46:15.315 KST [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
2020-05-05 21:46:15.519 KST [cli.common] readBlock -> INFO 008 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-05-05 21:46:15.537 KST [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
2020-05-05 21:46:15.740 KST [cli.common] readBlock -> INFO 00a Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-05-05 21:46:15.758 KST [channelCmd] InitCmdFactory -> INFO 00b Endorser and orderer connections initialized
2020-05-05 21:46:15.987 KST [cli.common] readBlock -> INFO 00c Received block: 0

===================== Channel 'mychannel' created ===================== 

Join Org1 peers to the channel...
Using organization 1
+ peer channel join -b ./channel-artifacts/mychannel.block
+ res=0
+ set +x
2020-05-05 21:46:19.407 KST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-05-05 21:46:19.478 KST [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel

Join Org2 peers to the channel...
Using organization 2
+ peer channel join -b ./channel-artifacts/mychannel.block
+ res=0
+ set +x
2020-05-05 21:46:22.752 KST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-05-05 21:46:22.834 KST [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel

Updating anchor peers for org1...
Using organization 1
+ peer channel update -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx --tls true --cafile /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
+ res=0
+ set +x
2020-05-05 21:46:25.969 KST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-05-05 21:46:25.990 KST [channelCmd] update -> INFO 002 Successfully submitted channel update
===================== Anchor peers updated for org 'Org1MSP' on channel 'mychannel' ===================== 

Updating anchor peers for org2...
Using organization 2
+ peer channel update -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com -c mychannel -f ./channel-artifacts/Org2MSPanchors.tx --tls true --cafile /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
+ res=0
+ set +x
2020-05-05 21:46:32.147 KST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-05-05 21:46:32.185 KST [channelCmd] update -> INFO 002 Successfully submitted channel update
===================== Anchor peers updated for org 'Org2MSP' on channel 'mychannel' ===================== 


========= Channel successfully joined =========== 

3. Deploy a chaincode

./network.sh deployCC

The command above installs the fabcar chaincode on peer0.org1.example.com and peer0.org2.example.com and deploys it to the mychannel channel. The output messages are as follows.

deploying chaincode on channel 'mychannel'

Vendoring Go dependencies ...
~/Hyperledger/fabric-samples/chaincode/fabcar/go ~/Hyperledger/fabric-samples/test-network
go: downloading github.com/hyperledger/fabric-contract-api-go v1.0.0
go: downloading github.com/xeipuuv/gojsonschema v1.2.0
go: downloading github.com/hyperledger/fabric-chaincode-go v0.0.0-20200128192331-2d899240a7ed
go: downloading github.com/gobuffalo/packr v1.30.1
go: downloading github.com/go-openapi/spec v0.19.4
go: downloading github.com/hyperledger/fabric-protos-go v0.0.0-20200124220212-e9cfc186ba7b
go: downloading github.com/gobuffalo/envy v1.7.0
go: downloading github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415
go: downloading github.com/go-openapi/jsonreference v0.19.2
go: downloading github.com/rogpeppe/go-internal v1.3.0
go: downloading github.com/gobuffalo/packd v0.3.0
go: downloading github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f
go: downloading github.com/go-openapi/swag v0.19.5
go: downloading github.com/PuerkitoBio/purell v1.1.1
go: downloading github.com/joho/godotenv v1.3.0
go: downloading github.com/go-openapi/jsonpointer v0.19.3
go: downloading golang.org/x/text v0.3.2
go: downloading github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e
go: downloading gopkg.in/yaml.v2 v2.2.2
go: downloading github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578
go: downloading github.com/golang/protobuf v1.3.2
go: downloading google.golang.org/grpc v1.23.0
go: downloading golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297
go: downloading google.golang.org/genproto v0.0.0-20180831171423-11092d34479b
go: downloading golang.org/x/sys v0.0.0-20190710143415-6ec70d6a5542
~/Hyperledger/fabric-samples/test-network
Finished vendoring Go dependencies
Using organization 1
++ peer lifecycle chaincode package fabcar.tar.gz --path ../chaincode/fabcar/go/ --lang golang --label fabcar_1
++ res=0
++ set +x
===================== Chaincode is packaged on peer0.org1 ===================== 

Installing chaincode on peer0.org1...
Using organization 1
++ peer lifecycle chaincode install fabcar.tar.gz
++ res=0
++ set +x
2020-05-05 21:49:02.614 KST [cli.lifecycle.chaincode] submitInstallProposal -> INFO 001 Installed remotely: response:<status:200 payload:"\nIfabcar_1:65710fa851d5c73690faa4709ef40b798c085e7210c46d44f8b1e2d5a062c9b0\022\010fabcar_1" > 
2020-05-05 21:49:02.614 KST [cli.lifecycle.chaincode] submitInstallProposal -> INFO 002 Chaincode code package identifier: fabcar_1:65710fa851d5c73690faa4709ef40b798c085e7210c46d44f8b1e2d5a062c9b0
===================== Chaincode is installed on peer0.org1 ===================== 

Install chaincode on peer0.org2...
Using organization 2
++ peer lifecycle chaincode install fabcar.tar.gz
++ res=0
++ set +x
2020-05-05 21:49:36.021 KST [cli.lifecycle.chaincode] submitInstallProposal -> INFO 001 Installed remotely: response:<status:200 payload:"\nIfabcar_1:65710fa851d5c73690faa4709ef40b798c085e7210c46d44f8b1e2d5a062c9b0\022\010fabcar_1" > 
2020-05-05 21:49:36.021 KST [cli.lifecycle.chaincode] submitInstallProposal -> INFO 002 Chaincode code package identifier: fabcar_1:65710fa851d5c73690faa4709ef40b798c085e7210c46d44f8b1e2d5a062c9b0
===================== Chaincode is installed on peer0.org2 ===================== 

Using organization 1
++ peer lifecycle chaincode queryinstalled
++ res=0
++ set +x
Installed chaincodes on peer:
Package ID: fabcar_1:65710fa851d5c73690faa4709ef40b798c085e7210c46d44f8b1e2d5a062c9b0, Label: fabcar_1
PackageID is fabcar_1:65710fa851d5c73690faa4709ef40b798c085e7210c46d44f8b1e2d5a062c9b0
===================== Query installed successful on peer0.org1 on channel ===================== 

Using organization 1
++ peer lifecycle chaincode approveformyorg -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls true --cafile /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --version 1 --init-required --package-id fabcar_1:65710fa851d5c73690faa4709ef40b798c085e7210c46d44f8b1e2d5a062c9b0 --sequence 1
++ set +x
2020-05-05 21:49:38.331 KST [chaincodeCmd] ClientWait -> INFO 001 txid [a122868923412b668458f936bc571db608cbd31e057fbb1556404ee63f175726] committed with status (VALID) at 
===================== Chaincode definition approved on peer0.org1 on channel 'mychannel' ===================== 

Using organization 1
===================== Checking the commit readiness of the chaincode definition on peer0.org1 on channel 'mychannel'... ===================== 
Attempting to check the commit readiness of the chaincode definition on peer0.org1 secs
++ peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --output json --init-required
++ res=0
++ set +x
{
	"approvals": {
		"Org1MSP": true,
		"Org2MSP": false
	}
}
===================== Checking the commit readiness of the chaincode definition successful on peer0.org1 on channel 'mychannel' ===================== 
Using organization 2
===================== Checking the commit readiness of the chaincode definition on peer0.org2 on channel 'mychannel'... ===================== 
Attempting to check the commit readiness of the chaincode definition on peer0.org2 secs
++ peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --output json --init-required
++ res=0
++ set +x
{
	"approvals": {
		"Org1MSP": true,
		"Org2MSP": false
	}
}
===================== Checking the commit readiness of the chaincode definition successful on peer0.org2 on channel 'mychannel' ===================== 
Using organization 2
++ peer lifecycle chaincode approveformyorg -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls true --cafile /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --version 1 --init-required --package-id fabcar_1:65710fa851d5c73690faa4709ef40b798c085e7210c46d44f8b1e2d5a062c9b0 --sequence 1
++ set +x
2020-05-05 21:49:47.033 KST [chaincodeCmd] ClientWait -> INFO 001 txid [f5433e77e5894194a92abb51809562e09a75480fd43e02c12e83aececd0ec2af] committed with status (VALID) at 
===================== Chaincode definition approved on peer0.org2 on channel 'mychannel' ===================== 

Using organization 1
===================== Checking the commit readiness of the chaincode definition on peer0.org1 on channel 'mychannel'... ===================== 
Attempting to check the commit readiness of the chaincode definition on peer0.org1 secs
++ peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --output json --init-required
++ res=0
++ set +x
{
	"approvals": {
		"Org1MSP": true,
		"Org2MSP": true
	}
}
===================== Checking the commit readiness of the chaincode definition successful on peer0.org1 on channel 'mychannel' ===================== 
Using organization 2
===================== Checking the commit readiness of the chaincode definition on peer0.org2 on channel 'mychannel'... ===================== 
Attempting to check the commit readiness of the chaincode definition on peer0.org2 secs
++ peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --output json --init-required
++ res=0
++ set +x
{
	"approvals": {
		"Org1MSP": true,
		"Org2MSP": true
	}
}
===================== Checking the commit readiness of the chaincode definition successful on peer0.org2 on channel 'mychannel' ===================== 
Using organization 1
Using organization 2
++ peer lifecycle chaincode commit -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls true --cafile /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --peerAddresses localhost:7051 --tlsRootCertFiles /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --version 1 --sequence 1 --init-required
++ res=0
++ set +x
2020-05-05 21:49:57.343 KST [chaincodeCmd] ClientWait -> INFO 001 txid [dad109dc9f24d8bd22d05ffdfb830a35573cc1775d2021f8ffc898c7bb32705b] committed with status (VALID) at localhost:9051
2020-05-05 21:49:57.346 KST [chaincodeCmd] ClientWait -> INFO 002 txid [dad109dc9f24d8bd22d05ffdfb830a35573cc1775d2021f8ffc898c7bb32705b] committed with status (VALID) at localhost:7051
===================== Chaincode definition committed on channel 'mychannel' ===================== 

Using organization 1
===================== Querying chaincode definition on peer0.org1 on channel 'mychannel'... ===================== 
Attempting to Query committed status on peer0.org1, Retry after 3 seconds.
++ peer lifecycle chaincode querycommitted --channelID mychannel --name fabcar
++ res=0
++ set +x

Committed chaincode definition for chaincode 'fabcar' on channel 'mychannel':
Version: 1, Sequence: 1, Endorsement Plugin: escc, Validation Plugin: vscc, Approvals: [Org1MSP: true, Org2MSP: true]
===================== Query chaincode definition successful on peer0.org1 on channel 'mychannel' ===================== 

Using organization 2
===================== Querying chaincode definition on peer0.org2 on channel 'mychannel'... ===================== 
Attempting to Query committed status on peer0.org2, Retry after 3 seconds.
++ peer lifecycle chaincode querycommitted --channelID mychannel --name fabcar
++ res=0
++ set +x

Committed chaincode definition for chaincode 'fabcar' on channel 'mychannel':
Version: 1, Sequence: 1, Endorsement Plugin: escc, Validation Plugin: vscc, Approvals: [Org1MSP: true, Org2MSP: true]
===================== Query chaincode definition successful on peer0.org2 on channel 'mychannel' ===================== 

Using organization 1
Using organization 2
++ peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls true --cafile /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n fabcar --peerAddresses localhost:7051 --tlsRootCertFiles /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles /home/trvoid/Hyperledger/fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --isInit -c '{"function":"initLedger","Args":[]}'
++ res=0
++ set +x
2020-05-05 21:50:03.941 KST [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 001 Chaincode invoke successful. result: status:200 
===================== Invoke transaction successful on peer0.org1 peer0.org2 on channel 'mychannel' ===================== 

Querying chaincode on peer0.org1...
Using organization 1
===================== Querying on peer0.org1 on channel 'mychannel'... ===================== 
Attempting to Query peer0.org1 ...1588683016 secs
++ peer chaincode query -C mychannel -n fabcar -c '{"Args":["queryAllCars"]}'
++ res=0
++ set +x

[{"Key":"CAR0","Record":{"make":"Toyota","model":"Prius","colour":"blue","owner":"Tomoko"}},{"Key":"CAR1","Record":{"make":"Ford","model":"Mustang","colour":"red","owner":"Brad"}},{"Key":"CAR2","Record":{"make":"Hyundai","model":"Tucson","colour":"green","owner":"Jin Soo"}},{"Key":"CAR3","Record":{"make":"Volkswagen","model":"Passat","colour":"yellow","owner":"Max"}},{"Key":"CAR4","Record":{"make":"Tesla","model":"S","colour":"black","owner":"Adriana"}},{"Key":"CAR5","Record":{"make":"Peugeot","model":"205","colour":"purple","owner":"Michel"}},{"Key":"CAR6","Record":{"make":"Chery","model":"S22L","colour":"white","owner":"Aarav"}},{"Key":"CAR7","Record":{"make":"Fiat","model":"Punto","colour":"violet","owner":"Pari"}},{"Key":"CAR8","Record":{"make":"Tata","model":"Nano","colour":"indigo","owner":"Valeria"}},{"Key":"CAR9","Record":{"make":"Holden","model":"Barina","colour":"brown","owner":"Shotaro"}}]
===================== Query successful on peer0.org1 on channel 'mychannel' =====================

4. Invoke the chaincode

Using the peer command, you can perform the following tasks.

  • invoke deployed smart contracts
  • update channels
  • install and deploy new smart contracts

Set the following environment variable so that the peer command can be found on the executable path.

export PATH=${PWD}/../bin:${PWD}:$PATH

Set the following environment variable so that the core.yaml file can be found.

export FABRIC_CFG_PATH=$PWD/../config/

Set the following environment variables to operate the peer CLI as Org1.

# Environment variables for Org1

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=localhost:7051

Use the following command to get the list of cars from the channel.

peer chaincode query -C mychannel -n fabcar -c '{"Args":["queryAllCars"]}'

Run the following command to change the owner of a car.

peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls true --cafile ${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n fabcar --peerAddresses localhost:7051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"function":"changeCarOwner","Args":["CAR9","Dave"]}'
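
To confirm the owner change, you can query the updated car; queryCar is one of the functions the fabcar chaincode provides:

peer chaincode query -C mychannel -n fabcar -c '{"Args":["queryCar","CAR9"]}'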

To operate the peer CLI as Org2, set the following environment variables.

# Environment variables for Org2

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_LOCALMSPID="Org2MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
export CORE_PEER_ADDRESS=localhost:9051

5. Bring down the test network

You can bring down the test network with the following command.

./network.sh down

The command above removes the following items.

  • the node and chaincode containers
  • the organization crypto material
  • the chaincode images from your Docker Registry
  • the channel artifacts and docker volumes
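
You can confirm the cleanup by listing any remaining containers and volumes:

docker ps -a
docker volume ls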

6. Next Steps

  • Bring up the network with Certificate Authorities
  • What’s happening behind the scenes?
  • Troubleshooting

