AI systems 'learn' by mining countless examples (data) to discern patterns and, on that basis, make decisions. At a base level it resembles a baby trying to navigate and decide as best it can, yet full of unknown implications. "ML and AI have revolutionized industries and our daily lives; they help video-streaming services predict which movies we'd like to watch...we can gain amazing insights about any spot on the globe... richer data equals smarter machines..." says Forbes magazine. Such learning capacity is praised by millions upon millions of enablers, while its deeper implications are tacitly ignored. In betting on "revolutions" and the promise to "gain amazing insights... richer data", the discourse overlooks a gradual transformation of agency and control: a rapidly emerging reality in which average human beings are bereft of traditional roles (jobs) and of control (rights). Cars and computers are made by robots, and increasingly so are art, design and music. Progressives envisage that governance, law, security, education, planning, childcare and more could also be outsourced to ML and AI. Could? All that and more is transforming as we speak.
AI's time (age) has finally come, "but more progress is needed". No, we are not talking about the pitiful sense of humor programmed into every Alexa. "Preempted by the speed of computation, doubling every 6 months, we foresee thousands of new jobs delegated to AI, with minimum required human intervention..." - businessinsider.com, 2021. "In less than 20 years, facial recognition technology went from impossible to very expensive to 4 dollars' worth of software... Reinforcement learning, used by DeepMind to play Atari games, is now being used by Facebook to play us, at a global level... social media in its current form is gamification... we are not the players, we are literally the game itself..." - Brian Christian (Yale University), in a recent lecture on "The Alignment Problem: Machine Learning and Human Values". In that presentation, Christian links the polarization of humans to the widespread use of "reinforcement learning and digital engagement".
ML and AI systems used by Amazon, Facebook, Twitter, Instagram, TikTok, YouTube and others loosely model the way neurons interact in the brain (what makes us react) - a field of study that began about 60 years ago, when no one had foreseen the internet. A KPMG survey states that "Activity stipulated by AI skyrocketed during Covid...". Reinforcement learning merges neuroscience, data and AI to serve business interests: systems that host and preempt engagement and dispense rewards accordingly, always predicated on the index of profit maximization. Aviation, logistics, retail and shopping, banking, social media, tourism, healthcare services and processed food are among the biggest investors in AI and ML at present.
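To make that reward loop concrete, here is a minimal, hypothetical sketch of the kind of "engagement maximization" described above: an epsilon-greedy bandit that learns which post earns the most clicks. The post names, click rates and parameters are invented for illustration; the platforms' actual recommender systems are proprietary and vastly more complex.

```python
import random

# Hypothetical illustration only: a tiny epsilon-greedy bandit that "learns"
# which of several posts earns the most clicks. The reward being maximized is
# engagement itself, exactly the loop the text describes.

POSTS = ["calm_news", "outrage_clip", "cat_video"]           # candidate items
true_click_rate = {"calm_news": 0.05, "outrage_clip": 0.30, "cat_video": 0.15}

estimates = {p: 0.0 for p in POSTS}   # learned value of showing each post
shows = {p: 0 for p in POSTS}         # how often each post has been shown
EPSILON = 0.1                         # exploration rate

def simulate_click(post: str) -> float:
    """Stand-in for a user: returns 1.0 (click) with the post's true rate."""
    return 1.0 if random.random() < true_click_rate[post] else 0.0

for step in range(10_000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < EPSILON:
        post = random.choice(POSTS)
    else:
        post = max(POSTS, key=lambda p: estimates[p])
    reward = simulate_click(post)
    shows[post] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[post] += (reward - estimates[post]) / shows[post]

print(estimates)  # the most "engaging" (here, outrage) item comes to dominate
```

Run long enough, the system reliably promotes whatever happens to be clicked most, with no notion of whether that content is good for the person clicking.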
While private companies widely use ML for market expansion and customer integration, governments, public institutions and media use it just as widely to host (or censor) public opinion, create propaganda and even sway elections and political power. AI and ML also play a crucial role in the proliferation of cryptocurrency and blockchain. Stepping outside that dominant narrative around AI and ML, we arrive at what is referred to as artificial general intelligence. Deep learning - whether DeepMind's systems or anyone else's, and regardless of the size and richness of the data - still fails to tackle general problems the way humans do. Many researchers consider that kind of AI breakthrough decades away from reality, and even less likely ever to manifest in a democratic sense. "That AI is democratic and neutral is an egalitarian notion as of now..." (Jacques Bughin and Eric Hazan). That AI helps detect as well as create 'deepfake videos' is a quixotic reality in itself.
There are several types of machine learning - supervised learning, unsupervised learning and reinforcement learning - each applied across intersecting fields of human activity. A staggering number of consumer applications across sectors are harnessing AI in their operations. While AI has prompted "considerable benefits for businesses and economies", visible in certain cases of productivity, education and innovation, it remains a technology mandated by mostly private powers. Even as 99% of the public remains at the receiving end of AI and ML, the impact on work and skill development is likely to be profound and felt across professions. Certain occupations, and the demand for certain skills, will decline while new ones appear, steering people to work alongside ever-evolving machines - or, as a recent NYU article puts it, alongside "increasingly capable machines". A section of the scientific community, however, does not concur, according to Gary Marcus: "A close look reveals that the newest systems, including DeepMind's much-hyped Gato, are still stymied by the same old problems of bureaucracy, energy and subsequent dependence on raw materials..." (Scientific American, 2022).
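As a rough illustration of the first two paradigms named above (the bandit sketch earlier in this piece stands in for reinforcement learning), the following example uses the open-source scikit-learn library on toy data; the numbers and labels are invented purely for demonstration and are not drawn from any real system.

```python
# Illustrative sketch: supervised learning is trained on labelled examples,
# unsupervised learning must find structure on its own. Requires scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression   # supervised: needs labels
from sklearn.cluster import KMeans                     # unsupervised: no labels

X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.2], [8.3, 7.9]])  # toy features
y = np.array([0, 0, 1, 1])                                       # toy labels

# Supervised learning: the model is told the "right answer" for each example.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.2, 2.1]]))   # -> [0]

# Unsupervised learning: the model groups the data without any labels at all.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                  # e.g. [0 0 1 1] (cluster ids are arbitrary)
```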
A large part of the appeal of AI lies in its ability to automate processes that
are normally time-consuming for humans to perform. Multilingual countries, where people (natives and visitors) speak a diverse set of languages, are where ML has proved its worth very well. Take the example of speech-to-text translation or robotized language trainers: hundreds of apps and services creating efficiency and engagement between individuals who speak different languages. A noticeable number of artists, musicians, media creators, dancers and coders, and their corresponding institutions, are also investing heavily in AI and ML. Will it be any surprise to see robots on stage - dare we say like rock stars - if the music actually sounds good?
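As a rough sketch of that speech-to-text-and-translation chain, the snippet below uses the open-source Hugging Face transformers library; the chosen models and the audio file name are illustrative assumptions, not a description of any particular commercial app, and running it requires transformers, torch and an audio backend such as ffmpeg.

```python
# Hedged sketch: transcribe spoken French, then translate it to English.
from transformers import pipeline

# 1. Speech-to-text: transcribe spoken French audio into French text.
#    "visitor_question.wav" is a placeholder file name for this example.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
french_text = asr("visitor_question.wav")["text"]

# 2. Machine translation: render the transcript in English for the listener.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
english_text = translator(french_text)[0]["translation_text"]

print(english_text)
```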
"Much faster than the proliferation of software during 1990 -2000, AI is impacting the arts at multiple levels, especially the visual and music sectors... the implementation and use of AI based frameworks is rising in unprecedented forms, while steadily altering the human capacity to perceive and produce..." Janet Kraynak 2020.
"As seen across departments of the state, in countries like China, US, Germany, UK, Sweden, Denmark, Canada, Spain, France to name a few, where automated systems using ML and AI have rapidly reduced human presence across jobs and ranks..." (Jacques Bughin and Eric Hazan). Gone are many of the offices, mountains of files and intersecting workers. Yet who (man or machine) still makes the final decision in each case remains crucial. AI with such responsibility and power can be weaponized. Widespread automation, using AI systems which manage national surveillance and security is one example. The Social Credit System in China, covering nearly 96% of the working population, has alarming predicaments, where the state manifesto is rolling out via AI. Ever been blacklisted and exiled by AI?
It is ironic, or plainly implicit, that such mainstream sirens seldom address the widespread use of AI and ML in military decision making - launching smart missiles, rockets and drones, actions which end up killing children, women and men in faraway countries. AI in the hands of 'powerful entities' equates to mass murder and destruction. Cars driven by AI, inspection drones, smart guides or robot doctors are of little or no consequence in places across the world where rights, security and privacy are increasingly violated. Ethical and jolly robots are, at best, Hollywood fiction (entertainment), not science. Beyond virtual "rewards" or "punishments" and learning by trial and error, AI has a long way to go (evolve). Some 60 years in the making, the pioneers of this science have little to offer in terms of "ethics learning" and "acquired empathy". Machines, however intelligent and tactile, fast and never tired, "do not possess the moral and cognitive capacity to differentiate good from bad... between a target and an innocent bystander." (AI in US Military, 2019)
Every incoming technology creates new challenges, even threats, necessitating decisions - new movements that determine who benefits and who loses out, and whether the benefits justify the damage at all. Yet who does the justifying, and who suffers? If these questions are handed over to AI's loudest enthusiasts, myriad unforeseen outcomes await the rest of us, more likely negative than positive. While not leaning on a 'carpe diem' philosophy nor falling into inherent old fears, we
believe that AI, in whatever form and size, should only serve (benefit and protect)
those who are most impacted by it, and not the other way around.