---
res:
  bibo_abstract:
    - Partially observable Markov decision processes (POMDPs) are widely used in probabilistic planning problems in which an agent interacts with an environment using noisy and imprecise sensors. We study a setting in which the sensors are only partially defined and the goal is to synthesize “weakest” additional sensors such that, in the resulting POMDP, there is a small-memory policy for the agent that almost-surely (with probability 1) satisfies a reachability objective. We show that the problem is NP-complete and present a symbolic algorithm that encodes the problem into SAT instances. We illustrate the trade-off between the amount of memory of the policy and the number of additional sensors on a simple example. We have implemented our approach and evaluated it on three classical POMDP examples from the literature, showing that in all of them the number of sensors can be significantly decreased (compared to existing solutions in the literature) without increasing the complexity of the policies.@eng
  bibo_authorlist:
    - foaf_Person:
        foaf_givenName: Krishnendu
        foaf_name: Chatterjee, Krishnendu
        foaf_surname: Chatterjee
        foaf_workInfoHomepage: http://www.librecat.org/personId=2E5DCA20-F248-11E8-B48F-1D18A9856A87
        orcid: 0000-0002-4561-241X
    - foaf_Person:
        foaf_givenName: Martin
        foaf_name: Chmelík, Martin
        foaf_surname: Chmelík
    - foaf_Person:
        foaf_givenName: Ufuk
        foaf_name: Topcu, Ufuk
        foaf_surname: Topcu
  bibo_volume: 2018
  dct_date: 2018^xs_gYear
  dct_identifier:
    - UT:000492986200006
  dct_language: eng
  dct_publisher: AAAI Press@
  dct_title: Sensor synthesis for POMDPs with reachability objectives@
...