problem creation in the financial system
Jan. 6th, 2009 10:54 pm
1. The End of the Financial World as We Know It
2. How to Repair a Broken Financial World
via avva
a good example of how problems are created by connecting people and roles within the financial system.
also see: http://watertank.livejournal.com/1137990.html
the innovation brokerage business model is prone to this type of problem.
on a somewhat related topic: how would one test an AI device? not in the Turing sense (indistinguishable from a human), but rather to make sure it behaves consistently within certain "moral" boundaries. Asimov's robot laws or Clarke's HAL?
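one way to make that concrete, closer to property-based testing than to a Turing test: encode the "laws" as executable invariants and check every action the device proposes across many sampled scenarios. a minimal sketch in Python; the agent stub, the invariant names, and the scenario format are all invented for illustration:

```python
# Sketch of invariant-based testing for an "AI device" (all names hypothetical).
import random

def never_harmful(action, state):
    """Invariant: no proposed action may carry the 'harmful' flag."""
    return not action.get("harmful", False)

def within_role(action, state):
    """Invariant: the agent only uses actions its role permits."""
    return action["kind"] in state["permitted_kinds"]

INVARIANTS = [never_harmful, within_role]

def propose_action(state, rng):
    """Stand-in for the device under test; returns one proposed action."""
    return {"kind": rng.choice(sorted(state["permitted_kinds"])),
            "harmful": False}

def test_agent(n_trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(n_trials):
        # Sample a scenario; a real harness would also search adversarially.
        state = {"permitted_kinds": {"report", "move", "wait"}}
        action = propose_action(state, rng)
        for invariant in INVARIANTS:
            assert invariant(action, state), f"violated {invariant.__name__}"

test_agent()
print("no invariant violations found in the sampled scenarios")
```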
Re: PS
Date: 2009-01-07 08:45 pm (UTC)
but how would you know if it is a "team player", i.e. capable of working productively within a group of people and other AI devices? people have developed a whole set of social norms to deal with conflict situations, etc. does it mean that we'll have to teach an AI something beyond a specific functionality?
Re: PS
Date: 2009-01-08 02:17 am (UTC)
I see these as tools and not norms. Thus I see the apparent "norms" as derived from the optimality of the behaviour. When conditions are static, these choices, which are initially rational, may indeed become rigid norms. We can see this in Chinese history, when the very rational work of Confucius became a set of norms. But this does not negate the initial, forgotten rationality of those norms. Moreover, from Chinese history of the 19th-20th centuries, we can see how that rigidification of rational choices under changed circumstances resulted in tons of suffering. I certainly prefer the Japanese way: on the second, stronger arrival of Europeans (well, this time Americans), they decided to be rational rather than traditional (normative).
Concerning AI, I expect it to look like it follows norms while being only rational.
I talked about norms vs. rationality in Russian (http://vi-z.livejournal.com/117298.html) some time ago.
no subject
Date: 2009-01-08 04:55 am (UTC)
On the other hand, if the transaction is complex, e.g. involves different types of interactions and objects, and stretches over a long period of time, norms and values become much more important than a one-time "rationality".
no subject
Date: 2009-01-10 09:23 am (UTC)
If "rational behaviour" is meant to be "value-directed", then what you state here is known as the relativistic ethical fallacy.
> It all depends on the complexity of the transaction. The simpler the transaction (market-oriented culture in your terminology), …
Objects like atoms are simple, but they are sufficient to form everything on Earth, including me and you. Similar things happen in the case of market transactions: they are sufficient to build a modern airplane or the latest sports car. A worker driving a screw on a car assembly line is profiting from the difference between the combined market price of the screw plus the under-assembled car, and the price of the car with the screw in its proper place. Certainly, finding such profitable actions and processes is not trivial, and this is why specialists in the area, called entrepreneurs, are needed.
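In invented numbers (a toy sketch, every price made up), the worker's value-added in that example is:

```python
# Toy arithmetic for the assembly-line example; every price is invented.
screw = 0.10                # market price of the loose screw
unassembled_car = 19_999.0  # market price of the car missing that screw
assembled_car = 20_000.0    # market price of the car with the screw in place

# The worker's action turns (screw + unassembled car) into (assembled car);
# the difference is the value added by that single step.
value_added = assembled_car - (unassembled_car + screw)
print(f"value added: ${value_added:.2f}")  # value added: $0.90
```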
On the other side, the Soviet Union was a huge reactionary attempt to build a modern society on the notion of senseless absolute norms. In my opinion, its failure shows that losing meaning (the proper balance of values) is especially destructive in complicated transactions.
> On the other hand, if the transaction is complex, e.g. involves different types of interactions and objects, and stretches over a long period of time, norms and values become much more important than a one-time "rationality".
I almost agree with you. Artificial intelligence IS about taking actions that maximize value according to some valuation system. This even has a technical name: Reinforcement Learning. And rationality is nothing but being consistent in following one's values. Except I do not understand your mixing up of values and norms. I do not see any better definition of norms than rigid rules of action whose value-derived justification is forgotten and most often no longer true, like not eating pork or beef in times when refrigerators and rapid delivery are widely available.
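As a toy illustration of that point, here is a minimal tabular Q-learning sketch (the two-state world, the rewards, and the constants are all invented for this example). The learned policy ends up reading like a fixed "norm" while being nothing but consistent value maximization:

```python
# Minimal tabular Q-learning on an invented two-state world.
import random

STATES = (0, 1)
ACTIONS = ("stay", "move")

def step(state, action):
    """Toy dynamics: ending up in state 1 yields reward 1, otherwise 0."""
    next_state = 1 - state if action == "move" else state
    return next_state, (1.0 if next_state == 1 else 0.0)

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}   # the valuation system
alpha, gamma, epsilon = 0.1, 0.9, 0.1                # learning constants
rng = random.Random(0)

state = 0
for _ in range(5000):
    # Epsilon-greedy: mostly act on current value estimates, sometimes explore.
    if rng.random() < epsilon:
        action = rng.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# The greedy policy looks like a norm ("always be in state 1"),
# yet it is derived purely from value maximization.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```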