Verifiably safe off-model reinforcement learning

@INPROCEEDINGS{DBLP:conf/tacas/FultonP19,
  pdf       = {pub/vpmu.pdf},
  author    = {Nathan Fulton and
               Andr{\'{e}} Platzer},
  title     = {Verifiably Safe Off-Model Reinforcement Learning},
  booktitle = {TACAS},
  year      = {2019},
  pages     = {413--430},
  doi       = {10.1007/978-3-030-17462-0_28},
  editor    = {Tom{\'{a}}{\v{s}} Vojnar and
               Lijun Zhang},
  longbooktitle = {Tools and Algorithms for the Construction
               and Analysis of Systems, TACAS 2019, Part {I}},
  publisher = {Springer},
  series    = {LNCS},
  volume    = {11427},
}