Since it was first proposed in 1950, the classic Turing test has aimed to judge an artificial intelligence's ability to pass as human and thereby demonstrate the success of the AI. Unlike those tests, CAPTCHAs are usually deployed as a security measure when filling out online forms or during other potentially vulnerable requests to a server. These tests are constructed to verify whether a given request to a system was made by a human or a machine. Older CAPTCHAs often took the form of distorted typography to be deciphered by the user, but newer versions of Google's CAPTCHA service are based on images, typically of public streets, pedestrian crossings or traffic lights. Automated systems still struggle to reliably interpret image data, which makes it hard for bots to pass an image-based CAPTCHA in an automated attack. Algorithms that are supposed to detect images containing specific subjects or motifs can only improve their powers of distinction by enlarging the labelled dataset on which they are trained.
Google uses images of traffic spaces because it can redirect user input from those CAPTCHAs into the development of its own self-driving vehicles. Each time a CAPTCHA is shown to a user, the resulting responses are collected to re-feed the same image recognition algorithm and consequently improve the computer's ability to solve the CAPTCHA itself. This ongoing escalation of computational capability results in computers that can appear human, shaped not solely, but also, by the CAPTCHA. The distinction between human and machine action is blurred by the very measures that were created to enforce it in the first place.
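The feedback loop described above can be sketched in miniature. The snippet below is a hypothetical illustration, not Google's actual pipeline: several users' answers to the same CAPTCHA tile are aggregated by majority vote, and only tiles where users strongly agree become training labels for the image classifier. The function name, threshold value, and tile identifiers are all assumptions made for the example.

```python
from collections import Counter

def aggregate_labels(responses, threshold=0.7):
    """Turn crowd-sourced CAPTCHA clicks into training labels.

    `responses` maps a tile id to the list of yes/no answers users
    gave to the question "does this tile contain a traffic light?".
    A tile only becomes a labelled training example when a large
    enough share of users agree (hypothetical 70% threshold).
    """
    labels = {}
    for tile, votes in responses.items():
        top, n = Counter(votes).most_common(1)[0]
        if n / len(votes) >= threshold:
            labels[tile] = top  # confident consensus -> usable label
    return labels

# Three users judged each tile; only unanimous tiles survive the cut.
responses = {
    "tile_a": [True, True, True],    # all agree: traffic light
    "tile_b": [True, False, True],   # 2/3 agree: below threshold
    "tile_c": [False, False, False], # all agree: no traffic light
}
print(aggregate_labels(responses))  # -> {'tile_a': True, 'tile_c': False}
```

Each solved CAPTCHA thus does double duty: it gates the form submission and, in aggregate, grows the labelled dataset that makes the recognition algorithm better, which is precisely the escalation the text describes.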