How would one test an AI device? Not in the Turing sense (indistinguishable from a human), but rather making sure it behaves consistently within certain "moral" boundaries.
I do not see "limitations" and "boundaries", whatever they are, as relevant to the question.