As Artificial Intelligence is solving increasingly hard problems, it's becoming more and more complex. This complexity leads to an often overlooked issue: the lack of transparency. This is problematic, because by taking answers at face value from an uninterpretable model (a black box), we're trading transparency for accuracy. This is bad for a couple of reasons:

Debugging. While it may be possible to figure out what's wrong with a car just by hearing it squeal and whir, opening the hood and inspecting the engine is far more efficient.
Fairness. As software is increasingly used to produce metrics and insights in critical areas of our societies, such as healthcare, criminal recidivism risk assessment, job application screening, and loan approval, the question of algorithmic fairness matters more than ever. Because algorithms learn from human-generated data, they often amplify human biases in decision making, making them prone to unfair judgments. For example, Amazon's CV-screening program was found to be biased against women.
With its recent gain in popularity, a lot of things have been called "Artificial Intelligence". But what is it, anyway? According to Wikipedia, it's "intelligence demonstrated by machines", but does such a thing exist? At the time of writing, there are four main types of AI algorithms. Expert systems are computer programs specifically designed to perform a task using prior human knowledge. Software engineers work closely with a domain expert to build the program, which then acts in a predictable way, as the domain expert would have done given the same processing power.
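To make the idea concrete, here is a minimal sketch of what an expert system can look like in code. The loan-approval rules, field names, and thresholds below are invented for illustration, not taken from any real underwriting policy:

```python
# A minimal sketch of a rule-based expert system. The rules and
# thresholds are hypothetical, standing in for knowledge that would
# normally be elicited from a domain expert (here, a loan officer).

def approve_loan(applicant: dict) -> str:
    """Apply hand-written rules in priority order. Every decision is
    traceable to the single rule that produced it."""
    if applicant["credit_score"] < 580:
        return "rejected: credit score below minimum threshold"
    if applicant["debt_to_income"] > 0.43:
        return "rejected: debt-to-income ratio too high"
    if applicant["credit_score"] >= 740:
        return "approved: prime credit score"
    return "referred: borderline case, needs human review"

print(approve_loan({"credit_score": 700, "debt_to_income": 0.30}))
# -> referred: borderline case, needs human review
```

Note that this kind of program is fully transparent by construction: each outcome can be traced back to the exact rule that fired, which is precisely the property black-box models give up.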