The assumption is that these AI thingywhoppers (droids?) will begin thinking on their own. Not only will they be able to compute, search, disaggregate, and decide much faster than us humans, they'll do so more effectively and more efficiently. Doubt it? When you start shopping for something online, you get some nifty little "suggestions" to one side of your screen about other things you might like to consider buying. Those suggestions are NOT coming from a human; it's AI at work.
What happens, however, when these powerful AI solution crafters start making decisions and feeding us "answers" to our problems that are completely devoid of compassion, of judgment, of heart?
Then we'll get automated solutions to complex problems that have some of these features:
- No concern for the human collateral damage of the decisions.
- Solutions driven solely by data, ignoring contextual elements.
- Decisions completely devoid of values.
- Purely transactional processes, disregarding the contributions of those who do the work.
- Zero accountability for the decision maker (in this case, AI).
- No interest in the development of others on the team (they're only humans, you know).
I've already worked with a few boss-humans who think and act like that. Heartless. Don't wanna do that anymore.