1. Ethics is about the character and quality of the decisions that shape the world around us. It requires us to accept responsibility for our choices and their consequences.
2. Artificial Intelligence is the imitation of human-like intelligence by a non-human, abstract agency. Although we primarily apply this term to computer systems, I think there is merit in seeing AI as a broader system that involves humans and machines together. Once you adopt this more expansive view, you realize that AI can be a company too! Think of Google, Facebook, and similar companies as a form of AI. Do you think that's plausible?
3. Technology (AI or otherwise) is like an amplifier: a mirror of our ethical stance. The only difference is that it is highly scalable and fast, and the primary issue with that is that rollback is extremely difficult.
4. We tolerate many unethical elements in our society today, but that is no justification for amplifying them with AI; doing so would be absurd. Just because a few friends and family like my (bad) singing doesn't mean I should start giving concerts.
5. The thing about an AI ethics framework is that it is exactly that: a framework. Think of ASIC, for instance: through company law, it may tell you how a business must be structured, but it leaves the rest for businesses to figure out. An AI ethics framework can follow the same path, defining the no-go zones and boundaries rather than every detail.
6. Governments can be (and are) slow. Sometimes corporations that lobby to twist the law in their favor make them slower. But, mostly, government is much like a dysfunctional family: we adjust and improve rather than leave. There is no need to rely heavily on corporations to take up that role and define the ethical framework; there is too much conflict of interest.
7. One ethical concern is junk AI solutions with inherent problems and flaws. They can discriminate, cause significant annoyance, and create societal challenges at scale; typically, the most vulnerable are the ones affected. Ethics comes into play here: if you know a solution is not ready or is flawed, why wouldn't you fix it before release? Greed?
8. Doing the right thing is more important than doing something right. AI systems may help with the latter. The critical question is: who defines what is right?
9. The current generation is already calling out previous generations for the mass exploitation of the planet and its resources, an exploitation that has accelerated since the industrial era. Are we perpetuating that pattern with lousy and unethical AI solutions?
10. Cross-border implications are a significant concern in AI ethics. If your data or software comes from another part of the world, which framework does it follow?
11. Corporations can, of course, contribute and do their part by adopting ethically stringent policies on how they use technology. But first, they need some introspection, because the technology will only amplify their values, whatever those values might be.
12. The significant challenges with AI will come from early-majority users. Currently, we have early adopters, so the problem may not seem that big. However, remember what happened with the internet and then with social media: once they became mainstream and almost anyone had access, people started doing questionable (and terrible) things with them, from mass polarization to terrorist recruitment to deepfakes, and it continues. Unless controls that enforcement agencies can use are baked into the technology, it is going to cause havoc.