Wednesday, June 01, 2016

As AI systems make more decisions, we need Libre Software now more than ever

I have been using AI technology on projects since the 1980s (first mostly symbolic AI, then neural networks and machine learning), and beyond the rapid technical progress, the other thing that strikes me is how a once-small community of AI developers has grown, perhaps by nearly three orders of magnitude, in the number of people working in the field. As the new conventional wisdom goes, AI services will be like cloud computing services and power: ubiquitous.

As AI systems decide what medical care people get, who gets mortgages and at what interest rates, and how employees are ranked in large organizations; as nation states automatically determine who is a threat to their power base or to public safety; and as AI systems control driverless cars, maintain detailed information on everyone, and drive their purchasing decisions, having some transparency in algorithms and software implementations is crucial.

Notice how I put "Libre Software" in the title, not "Open Source." While business-friendly permissive licenses like Apache 2 and MIT are appropriate for many types of projects, Libre Software licenses like the GPLv3 and AGPLv3 will help ensure that the public commons of AI algorithms and software implementations stays open and transparent.

What about corporations maintaining their proprietary intellectual property for AI? I am sensitive to this issue, but I still argue that combining a commons of Libre AI software with proprietary server infrastructure and proprietary data sources should be sufficient to protect corporations' investments and competitive advantages.

2 comments:

grant rettke said...

Innocent question: what are some job titles for the folks who would perform that kind of assessment of algorithms? Would it be a government position like a health inspector?

Mark Watson, author and consultant said...

Hello Grant,

I meant that just having the bulk of AI software be Libre software would increase transparency because the code would be public. I didn't mean that a government bureaucracy would be required.

In much the same way that security experts publicly write about possible security exploits, AI experts could publicly critique software that makes decisions affecting many people's lives.