Page 1

Reinforcement Learning China Summer School

Imitation Learning

Yang Yu, Nanjing University

Aug. 1, 2020

Page 2

Previously

• Value-based Reinforcement Learning
• Policy-based RL and RL Theory
• Optimisation in Learning
• Model-based Reinforcement Learning
• Control as Inference

Page 3

Position of RL in AI

[figure: Artificial Intelligence ⊃ Machine Learning ⊃ {Supervised Learning, Unsupervised Learning, Reinforcement Learning}]

Page 4

Supervised learning

f(x) = y

x: instance → y: label (e.g., an apple image → "apple")

f: function / model

Learn a model to fit the data

Page 5

Supervised learning

f(x) = y    (f: function / model)

Learn a model to fit the data

Find the model parameters:

\theta^* = \arg\min_\theta \sum_i \| f(x_i \mid \theta) - y_i \| + \| \theta \|
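As a concrete instance of this objective, a linear model with squared loss and an L2 penalty has a closed-form minimizer; a minimal numpy sketch (the linear model and the squared loss are illustrative choices, not from the slides):

```python
import numpy as np

def fit(X, y, lam=0.1):
    # theta* = argmin_theta sum_i (theta^T x_i - y_i)^2 + lam * ||theta||^2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

theta = fit(np.random.randn(100, 5), np.random.randn(100))
```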

Page 6

Supervised learning

f(x) = y    (f: function / model)

Learn a model to fit the data

Find the model parameters:

\theta^* = \arg\min_\theta \sum_i \| f(x_i \mid \theta) - y_i \| + \| \theta \|

Assumption: training data distribution = test data distribution

Page 7

Unsupervised learning

General task: discover the structural information in the data

a data set without feedback information (labels)

clustering, density estimation, feature representation, …

Page 8

Unsupervised learning

self-supervised learning

f(x) = z    (encoder)
g(z) = x    (decoder / generator)

x: instance, z: learned representation

Page 9

Reinforcement learning

[figure: agent-environment loop; the agent outputs an action given the state, the environment returns a reward and the next state]

a = \pi(s)

Target: \pi^* = \arg\max_\pi \, (r_0 + \gamma r_1 + \gamma^2 r_2 + \cdots)

Page 10

RL from scratch is slow

figure from [Hasselt et al., AAAI’16]

• huge search space
• sampling-based exploration

learning with experts/teachers

Page 13

Imitation learning

The expert/teacher provides demonstrations

https://www.youtube.com/watch?v=ydnjS__8Ooc

agent learns from demonstrations to imitate the expert

s_0 \to a_1 \to s_1 \to a_2 \to s_2 \to \cdots \to a_m \to s_m

Page 14

Types of IL methods

• Copy actions: behavior cloning
• Copy intention: apprentice learning
• Copy distribution: generative adversarial IL

Page 15

Copy actions: behavior cloning

demonstration data:

s_0 \to a_1 \to s_1 \to a_2 \to s_2 \to \cdots \to a_m \to s_m

split into labeled data:

D = \{\, (s_0, a_1),\; (s_1, a_2),\; \ldots,\; (s_{m-1}, a_m) \,\}

learning objective:

\theta^* = \arg\min_\theta \; \mathbb{E}_{s,a \sim D}\, \mathrm{loss}(\pi(s \mid \theta), a)

Page 16

Behavior cloning examples

D = \{\, (s_0, a_1),\; (s_1, a_2),\; \ldots,\; (s_{m-1}, a_m) \,\}

Improvement in preparing the labeled data: remove highly correlated data.

BC quickly learns a rough policy with no trial-and-error cost, but with limited power.

Human player data was used to initialize the policy in AlphaGo and AlphaStar.

Page 17

Behavior cloning limitation

Supervised learning objective:

\arg\min_\theta \; \mathbb{E}_{x \sim D}\, \mathrm{loss}(f_\theta(x), y(x))

Reinforcement learning objective:

\arg\min_\theta \; \mathbb{E}_{s \sim D_{\pi_\theta}}\, \mathrm{cost}(s, \pi_\theta(s)), \quad \text{e.g., } \mathrm{cost} = -\mathrm{reward}

The key difference: the RL state distribution D_{\pi_\theta} depends on the policy being learned, while the SL data distribution D is fixed.

Page 18

Behavior cloning limitation

[S. Levine, CS-294-112-2]

Compounding error: a small per-step imitation error drives the policy into states absent from the demonstration data, where its errors grow further.

Supervised learning objective:

\arg\min_\theta \; \mathbb{E}_{x \sim D}\, \mathrm{loss}(f_\theta(x), y(x))

Reinforcement learning objective:

\arg\min_\theta \; \mathbb{E}_{s \sim D_{\pi_\theta}}\, \mathrm{cost}(s, \pi_\theta(s)), \quad \text{e.g., } \mathrm{cost} = -\mathrm{reward}

Page 19

Behavior cloning limitation - formally

Consider T-step reinforcement learning with bounded reward in [0, 1]:

J(\theta) = \mathbb{E}_{s,a,r \sim \pi_\theta}\left[ \frac{1}{T} \sum_{t=1}^{T} r_t \right]

We have data from the optimal policy:

s_0 \to a_1^* \to s_1 \to a_2^* \to s_2 \to \cdots \to a_T^* \to s_T

We apply BC (supervised learning) to imitate the policy with a small classification error:

\mathbb{E}_{s,a^*}[\pi(s) \neq a^*] \le \epsilon

Then the return of the BC policy satisfies

J(\pi_{BC}) \ge J(\pi^*) - \frac{T+1}{2}\,\epsilon

[John Langford and Bianca Zadrozny. Reducing T-step reinforcement learning to classification.]

Page 20

Proof idea

s_0 \to a_1^* \to s_1 \to a_2^* \to s_2 \to \cdots \to a_T^* \to s_T

reward loss at step 1: < \frac{T}{T}\,\epsilon
reward loss at step 2: < \frac{T-1}{T}\,\epsilon
reward loss at step t: < \frac{T-t+1}{T}\,\epsilon

total loss: < \frac{T+1}{2}\,\epsilon

[John Langford and Bianca Zadrozny. Reducing T-step reinforcement learning to classification.]
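Summing the per-step bounds gives the stated total; a one-line check of the arithmetic:

```latex
\sum_{t=1}^{T} \frac{T-t+1}{T}\,\epsilon
  = \frac{\epsilon}{T}\sum_{k=1}^{T} k
  = \frac{\epsilon}{T}\cdot\frac{T(T+1)}{2}
  = \frac{T+1}{2}\,\epsilon
```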

Page 21

More advanced theory

[Tian Xu, Ziniu Li, Yang Yu. On Value Discrepancy of Imitation Learning. https://arxiv.org/abs/1911.07027]

J(✓) = Es,a,r⇠⇡✓ [1X

t=1

�t�1rt]

<latexit sha1_base64="06Vt4gdXPE3u1cEHQhehpmIMPiQ=">AAADZHicZVJda9swFFXjfXRet6YrexoMsTBIIQ126Bh7KDSMwranDpa2YLtGVq4dEcsykrwmCP2m/Zo97GV72++Y87FuTi4I7j3n3HMldJMyZ0p73o+dlnPv/oOHu4/cx3tPnu63D55dKlFJCiMqciGvE6IgZwWMNNM5XJcSCE9yuEqm7xf81VeQionii56XEHGSFSxllOgaitsfP3VDPQFNjvApPo+N6pGeDBXjOCxZvKJsgENV8djoU9/ehKxI9RyHGeGc3Bh97FssYx3F7Y7X95aBtxN/nXTQOi7ig5YTjgWtOBSa5kSpwPdKHRkiNaM5WDesFJSETkkGQZ0WhIOKzPLNFr+ukTFOhaxPofESbXSU41TDLDKZJOWE0VnTj3KY+V49jCtO9KRBmkqLkpEmVgvVnCdb4GK6aqILQy1ErrbEesKbWLJRkzwTktWyzQv/Y6zrhncTFGijJuJWFPlcQqp6i4KToiK5JplaaAu4paL+qmJsQpiVQLUNBlEQGRfXYZZeSWLObWw6vg3CKcji2Ov7b4AHuDPA0R1yAjyyyy7btKV/ff3oP78u7vj4aEM6tCtFKsnUDK1167XxN5dkO7kc1OP77z6fdM6G6wXaRS/QK9RFPnqLztAHdIFGiKJv6Dv6iX61fjt7zqHzfCVt7ax7DlEjnJd/AHEeGe4=</latexit>

Consider continuing reinforcement learning with discount reward

1

1 � �

<latexit sha1_base64="ZFDiGM2sftH/eLTwfrP3EsfS55k=">AAADLXicZVJNa9tAEN0o/UjVLyc99iIqCik0RmtSQm4xpdBjCnUSkIRZrcey8H6I3VVjs+xf6bW99df0UCi99m90rbhpZQ8svHnz5s3ATlGzSpsk+bET7N65e+/+3oPw4aPHT5729g8utGwUhRGVTKqrgmhglYCRqQyDq1oB4QWDy2L+dlW//ARKV1J8NMsack5KUU0rSoynxr2DbKoItdhZfJSVhHPixr046SdtRNsAr0GM1nE+3g92s4mkDQdhKCNapzipTW6JMhVl4MKs0VATOiclpB4KwkHntl3eRS89M4mmUvknTNSynY56MjWwyG2pSD2r6KLrRzkscOKHcc2JmXWKtjGyrkiX80K95MUWuZquu+zK0EjJ9JbYzHiXKzZywkqpKi/bXPhfxYVhdjtBg7F6Jq+lYEsFU/16lXAiGsIMKfVKK+CaSv9BYmIzWNRAjUsHeZrbMPJhW6+isO/c2MbYpdkclDhK+vgN8DSKB1F+yxwDz13b5bq29K8vzv/zO4xiHL3akA7djcKfz9wOnQv92eDNI9kGFwM/vn/64Tg+G64PaA89Ry/QIcLoBJ2h9+gcjRBFC/QZfUFfg2/B9+Bn8OtGGuyse56hTgS//wCO0ghZ</latexit>

is the total weightsor effective horizon
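The "effective horizon" reading comes from the geometric series of the discount weights:

```latex
\sum_{t=1}^{\infty} \gamma^{t-1} = \frac{1}{1-\gamma}, \qquad 0 \le \gamma < 1
```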

Page 22

Copy actions: with super-expert

DAgger: Dataset Aggregation

A sleepless expert is available to provide actions at any time

[S. Ross et al., AISTATS’2011]

1. start from a random policy
2. run the policy to collect states
3. ask the expert to provide optimal actions
4. aggregate the labeled dataset
5. learn the policy by BC
6. repeat from step 2 (see the sketch below)

The policy is closer to the expert
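A minimal DAgger loop, assuming a classic gym-style environment, a queryable `expert_action(s)`, and a `train_bc(states, actions)` routine that returns a policy callable (all assumed interfaces, not from the paper):

```python
import numpy as np

def dagger(env, expert_action, train_bc, n_iters=10, horizon=200):
    data_s, data_a = [], []
    policy = lambda s: env.action_space.sample()         # 1. random initial policy
    for _ in range(n_iters):
        s = env.reset()                                  # 2. run the policy
        for _ in range(horizon):
            data_s.append(s)
            data_a.append(expert_action(s))              # 3. expert labels visited states
            s, _, done, _ = env.step(policy(s))          # classic gym API assumed
            if done:
                break
        policy = train_bc(np.array(data_s), np.array(data_a))  # 4.-5. aggregate + BC
    return policy                                        # 6. repeated above
```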

Page 23

Copy intention: apprentice learning

We view the expert as a learning agent.

Expert:
  environment: MDP (S, A, T)
  reward function: R_E
  learned optimal policy: \pi_E, which achieves the optimal return

Imitator:
  demonstrations: s_0 \to a_1^* \to s_1 \to a_2^* \to \cdots \to a_T^* \to s_T
  environment: MDP (S, A, T); reward function: ?

Page 24

Inverse RL

Learn the reward function from expert data.

Key assumption: expert data comes from the expert policy that maximizes the return.

Recall the Bellman equation:

V^\pi = R + \gamma P^\pi V^\pi \quad\Longrightarrow\quad V^\pi = (I - \gamma P^\pi)^{-1} R

Optimality condition: for any other policy \pi,

P^{\pi^*} V^{\pi^*} \ge P^{\pi} V^{\pi^*}

So the reward function needs to satisfy

(P^{\pi^*} - P^{\pi})(I - \gamma P^{\pi^*})^{-1} R \ge 0

Page 25

Inverse RL algorithm

We cannot enumerate all policies, but we can find some.

1. start from a random policy
2. find a reward function such that

\sum_{s,a \in D^*} R(s,a) \;\ge\; \sum_{s,a \sim \pi} R(s,a)

or, more conveniently,

R' = \arg\max_R \left[ \sum_{s,a \in D^*} R(s,a) - \sum_{s,a \sim \pi} R(s,a) \right]

3. learn a new policy according to R'
4. repeat from step 2, and consider all generated policies:

\sum_{s,a \in D^*} R(s,a) \;\ge\; \sum_{\pi} \sum_{s,a \sim \pi} R(s,a)

Page 26

Inverse RL algorithm

In a linear reward function representation, R = w^\top \phi(s), the return of a trajectory is

w^\top \phi(s_0) + w^\top \phi(s_1) + \cdots + w^\top \phi(s_T) = w^\top \mu

1. start from a random policy
2. find a reward function by

w = \arg\max_w \; w^\top (\mu_E - \mu_i), \quad \text{or over all previous policies:} \quad w = \arg\max_w \min_i \; w^\top (\mu_E - \mu_i)

3. learn a new policy according to the new reward function w
4. repeat from step 2, and consider all generated policies (see the sketch below)

[Pieter Abbeel, Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. ICML 2004]
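A sketch of this feature-expectation loop, after Abbeel & Ng (2004). Here `mu_expert` and `mu_init` are feature expectations, `estimate_mu(policy)` is a Monte-Carlo estimator, and `solve_rl(w)` is an RL solver; all are assumed interfaces, and the "closest feature gap" update is a crude stand-in for the full max-margin step:

```python
import numpy as np

def apprenticeship_learning(mu_expert, mu_init, estimate_mu, solve_rl,
                            n_iters=20, tol=1e-3):
    mus = [mu_init]                                  # 1. start from a random policy
    policy = w = None
    for _ in range(n_iters):
        # 2. reward direction: feature gap to the expert (stand-in for
        #    the max-margin step  w = argmax_w min_i w^T (mu_E - mu_i))
        w = min([mu_expert - mu for mu in mus], key=np.linalg.norm)
        if np.linalg.norm(w) < tol:                  # expert's features matched
            break
        w = w / np.linalg.norm(w)
        policy = solve_rl(w)                         # 3. new policy for R = w^T phi
        mus.append(estimate_mu(policy))              # 4. repeat, keeping all policies
    return policy, w
```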

Page 27

Copy distribution

We want our policy to generate a state distribution close to the expert data.

Distribution similarity measures:

KL divergence: D_{KL}(P_r \,\|\, P_g) = \sum_{x \in X} P_r(x) \log \frac{P_r(x)}{P_g(x)}

Total Variation: D_{TV} = \sup_{x \in X} |P_r(x) - P_g(x)|

JS divergence: D_{JS} = \tfrac{1}{2} D_{KL}(P_r \,\|\, M) + \tfrac{1}{2} D_{KL}(P_g \,\|\, M), \quad M = \tfrac{1}{2}(P_r + P_g)

Earth-Mover distance: D_{W} = \inf_{\gamma \in \Pi(P_r, P_g)} \mathbb{E}_{(x,y) \sim \gamma}\, \|x - y\|

…

Page 28

Match distributions

We want the imitator data distribution P_g to equal the expert data distribution P_r, i.e., the goal is

\frac{P_r(x)}{P_r(x) + P_g(x)} = \frac{P_g(x)}{P_r(x) + P_g(x)}

Use the JS divergence:

D_{JS}(P_r, P_g) = \int \log\!\left(\frac{P_r(x)}{P_r(x) + P_g(x)}\right) P_r(x)\, d\mu(x) + \int \log\!\left(\frac{P_g(x)}{P_r(x) + P_g(x)}\right) P_g(x)\, d\mu(x)

But we only have data sets, not distributions.

Page 29

Approximately match distributions

But we only have data sets, not distributions → learn a distribution from data.

Employ a classifier D to discriminate the two data sets: P_g as class 0, P_r as class 1.

Then we have an approximated distribution ratio:

D(x) = \frac{P_r(x)}{P_r(x) + P_g(x)}
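A minimal sketch of this classifier-as-density-ratio trick on 1-D toy data. The Gaussians and the equal sample sizes are assumptions; with equal class priors, the optimal classifier's probability output is exactly P_r/(P_r+P_g):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

x_r = np.random.normal(0.0, 1.0, size=(1000, 1))     # stand-in "expert" data ~ P_r
x_g = np.random.normal(0.5, 1.0, size=(1000, 1))     # stand-in "imitator" data ~ P_g
X = np.vstack([x_r, x_g])
y = np.concatenate([np.ones(1000), np.zeros(1000)])  # P_r: class 1, P_g: class 0
D = LogisticRegression().fit(X, y)
# D.predict_proba(.)[:, 1] approximates P_r(x) / (P_r(x) + P_g(x))
ratio_at_zero = D.predict_proba([[0.0]])[0, 1]
```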

Page 30

Approximately match distributions

With the classifier D(x) = P_r(x) / (P_r(x) + P_g(x)) as above, the objective for minimizing the JS divergence is

\mathbb{E}_{x \sim P_r}[\log D(x)] + \mathbb{E}_{x \sim P_g}[\log(1 - D(x))]

which is the objective of GAN [Goodfellow et al., NIPS’2014].

Page 31

Generative Adversarial Networks

[Goodfellow et al., NIPS’2014]
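The GAN objective above can be exercised end-to-end in a few lines; a minimal 1-D toy sketch in PyTorch (architectures, data, and hyperparameters are all illustrative assumptions):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # stand-in samples from P_r
    fake = G(torch.randn(64, 8))                   # generator samples, P_g
    # discriminator step: max_D E_{P_r}[log D(x)] + E_{P_g}[log(1 - D(x))]
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step (non-saturating form: maximize log D(G(z)))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```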

Page 32

Recent advances

Many variants: https://github.com/hindupuravinash/the-gan-zoo

Mode collapse problem [NIPS 2018, https://arxiv.org/abs/1711.10337]

Page 33

GAN with RL generator

Page 34

GAN with RL generator

[figure: the discriminator's score is used as a reward; the generator is updated by RL]

Page 35

Generative Adversarial Imitation Learning

[Jonathan Ho, Stefano Ermon: Generative Adversarial Imitation Learning. NIPS’2016, 4565-4573]

(GAIL)

Start from a random policy.

Loop:

1. learn the discriminator:

\max_D \; \mathbb{E}_{(s,a) \sim \pi_E}[\log D(s,a)] + \mathbb{E}_{(s,a) \sim \pi}[\log(1 - D(s,a))]

2. learn the policy by RL using the reward function

r(s, a) = \log(D(s, a))

Until convergence. (A loop sketch follows below.)
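A loop sketch of GAIL following the two steps above. Here `sample_expert`, `rollout`, `update_disc`, `disc`, and `update_policy` are assumed interfaces, not from the paper:

```python
import math

def gail(policy, disc, sample_expert, rollout, update_disc, update_policy,
         n_iters=100):
    # sample_expert()/rollout(policy) yield (s, a) pairs; disc(s, a) returns
    # D(s, a) in (0, 1); update_policy does one policy-gradient step.
    for _ in range(n_iters):
        expert_sa = sample_expert()
        gen_sa = rollout(policy)
        # 1. discriminator step:
        #    max_D E_{pi_E}[log D(s,a)] + E_{pi}[log(1 - D(s,a))]
        update_disc(disc, expert_sa, gen_sa)
        # 2. policy step by RL with reward r(s,a) = log D(s,a)
        rewards = [math.log(disc(s, a)) for s, a in gen_sa]
        policy = update_policy(policy, gen_sa, rewards)
    return policy
```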

Page 36

Connections between IRL & GAIL

IRL: iterates between the reward function and the policy

\max_{c \in C} \min_{\pi \in \Pi} \; -H(\pi) + \mathbb{E}_{\pi}[c(s,a)] - \mathbb{E}_{\pi_E}[c(s,a)], \qquad H(\pi) = \mathbb{E}_{\pi}[-\log \pi(a \mid s)]

GAIL: iterates between the discriminator and the policy

\min_G \max_D \; \mathbb{E}_{x \sim p_r}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

The discriminator (on trajectories) acts as the reward function for inverse reinforcement learning, and yields a better disentangled reward.

[Justin Fu, Katie Luo, Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. arXiv abs/1710.11248]

[Chelsea Finn, Paul Christiano, Pieter Abbeel, Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv abs/1611.03852]

Page 37

Various settings of imitation learning

RL problem: <S, A, R, P>
Observation: <S', A', R', P'>

Simplest: S=S', A=A', P=P' (internal data)
Hardest: S≠S', A≠A', P≠P' (observational data)

Can practice (P)?
  No — learn only from demonstration data
  Yes — try in the environment

Environment reward accessible (R)?
  No — simulate the expert
  Yes — maximize the reward

Page 38

Applications

[Abbeel et al., IJRR'2010] [Finn et al., ICML'2016] [Silver et al., RSS'2008]

• Initialize policy
• Learn without expressing a reward function
• Simulate environments

Page 39

An example in real application

recommendation in E-commerce

prediction: this sells well → sold more
prediction: this looks bad → sold less

Page 40

An example in real application

recommendation in E-commerce

prediction: this sells well → decision: recommend more → sold more
prediction: this looks bad → decision: recommend less → sold less

(The predictions drive recommendation decisions, which in turn change the sales data.)

Page 41

Experimental study

Initially: simplified simulator

[Y. Hu, Q. Da, A. Zeng, Y. Yu and Y. Xu. Reinforcement learning to rank in e-commerce search engine: Formalization, analysis, and application. KDD 2018.]

Page 42

Experimental study

Initially: simplified simulator

[Y. Hu, Q. Da, A. Zeng, Y. Yu and Y. Xu. Reinforcement learning to rank in e-commerce search engine: Formalization, analysis, and application. KDD 2018.]

Compared: five online learning-to-rank algorithms vs. DDPG with full backup estimation of Q

Experiment results

Page 43

Simulate Taobao!

[figure: buyers interacting with the Taobao platform, the real-world Taobao]

Page 44

How to simulate humans?

Learning buyers’ actions?

buyers data: observations (features) -> actions (labels)

supervised learning?

Page 45

How to simulate humans?

Learning buyers’ actions?

buyers data: observations (features) -> actions (labels)

supervised learning?

[figure: buyers interacting with the Taobao platform]

Page 46

Idea: Motivated as buyers

learning reward function from observations via inverse reinforcement learning

1. buyers’ motivation may remain unchanged across different platforms

[J.-C. Shi, Y. Yu, Q. Da, S.-Y. Chen, and A.-X. Zeng. Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. AAAI’19]

Page 47

Idea: Motivated as buyers

learning reward function from observations via inverse reinforcement learning

1. buyers’ motivation may remain unchanged across different platforms

[J.-C. Shi, Y. Yu, Q. Da, S.-Y. Chen, and A.-X. Zeng. Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. AAAI’19]

2. practice-to-learn by the agent in various situations

Page 48

Idea: Motivated as buyers

learning reward function from observations via inverse reinforcement learning

1. buyers’ motivation may remain unchanged across different platforms

[J.-C. Shi, Y. Yu, Q. Da, S.-Y. Chen, and A.-X. Zeng. Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. AAAI’19]

2. practice-to-learn by the agent in various situations

[figure: response/action loops over a reward function space; the reward function is learned from buyers' responses, and the agent practices actions under the learned reward function]

Page 49

Generalization

IRL vs BC

[figure: improvement of RL over Random (-10% to 30%) after 0 days, 1 day, 1 week, and 1 month, comparing VTaobao by BC vs. VTaobao by MAIL]

[J.-C. Shi, Y. Yu, Q. Da, S.-Y. Chen, and A.-X. Zeng. Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. AAAI’19]

Page 50

Virtual vs real

[figures: Virtual Taobao vs. Ground-Truth; proportions over gender (M/F/U), category (1-8), power (1-3), and level (F/T); R2P over gender, category, power, level, and time of day (00:00-24:00)]

[J.-C. Shi, Y. Yu, Q. Da, S.-Y. Chen, and A.-X. Zeng. Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. AAAI’19]

Page 51

RL from Virtual Taobao

[figures: Turnover Gap (0-4%) and Volume Gap (0-8%) over time of day (00:00-24:00), RL vs. SL1 and RL vs. SL2]

Train RL in Virtual Taobao with a conservative constraint:

• do not go far away from SL model

• 30% increase of GMV initially

Improve online performance with no cost

[J.-C. Shi, Y. Yu, Q. Da, S.-Y. Chen, and A.-X. Zeng. Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. AAAI’19]

Page 52

For academic purpose

VirtualTaobao simulator: https://agit.ai/Polixir/VirtualTaobao

• VirtualTaobao provides a "live" environment just like the real Taobao
• anyone can test new recommendation algorithms interactively on their own laptops
• much more realistic than static data sets

[J.-C. Shi, Y. Yu, Q. Da, S.-Y. Chen, and A.-X. Zeng. Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. AAAI’19]

A new simulator is on the way !

Page 53

[email protected] http://lamda.nju.edu.cn/yuy

Thank you!