“I’m not a robot” is a version of reCAPTCHA that uses various cues to determine whether the user is a human or a bot. It is far more effective than earlier CAPTCHAs, which asked users to transcribe distorted text, because modern bot programs can now decipher such text with 99.8% accuracy.
The Internet has made life so easy. Everything you want is just a click away, easily accessible without leaving your comfort zone. Do you want to replenish your household supplies? Go to an e-commerce site and click on the items you need. Do you want to send money without moving an inch from your chair? Use your bank's net banking services. Want information on any niche hobby that interests you? Just read a variety of blogs specifically catered to your tastes.

However, as always happens, there are advantages and disadvantages to every revolutionary technology. In the case of the Internet, one of the many concerns in managing a digital infrastructure is unsolicited access to websites by bots.

From financial fraud to emptying out the stock of an e-commerce website, bots can wreak havoc. It has become necessary to develop increasingly advanced ways to identify who is actually accessing a website: a warm-blooded, flesh-and-blood human or a cold, calculating bot.

The most common way to do this today, which I'm sure you have come across (unless, of course, you're living under a rock!), is the reCAPTCHA, or the single click that tells a human apart from a bot.

Why do websites need to test if you are a bot?

As stated earlier, the Internet is not the ideal place we once imagined it to be. It is full of bad actors who want to take advantage of flaws in the digital infrastructure and use them to fulfil their malicious intentions.

Bots can be trained to do all kinds of damage. They can create multiple accounts on social media platforms and with email providers (such as Gmail), inflating user numbers and then using those accounts to create havoc on other parts of the Internet. They can fill out forms with unwanted content and spread, you guessed it, spam. The same goes for comments on websites and other platforms. All of this makes it difficult to assess actual human interaction with a platform or website.
Then there are the scrapers that use bots to collect users' email addresses and put them to all kinds of malicious use. Hackers can run "dictionary attacks", trying every word in the dictionary to crack passwords that are not strong enough. This is why you see an "I'm not a robot" test when logging into so many websites. Bots are also used to leave fake positive reviews and 5-star ratings on products and services, creating a false image of them.
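To make the dictionary-attack idea concrete, here is a minimal, purely illustrative sketch: an attacker hashes every word in a wordlist and compares it against a leaked password hash. The wordlist and the stored hash below are made up for the example; they are not from any real breach.

```python
# Illustrative sketch of a dictionary attack: hash each wordlist entry and
# compare it with a stolen password hash. All values here are invented.
import hashlib

WORDLIST = ["password", "letmein", "sunshine", "dragon"]  # tiny example wordlist

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

stored_hash = sha256("sunshine")  # pretend this hash leaked from a database

def dictionary_attack(target_hash: str, wordlist: list[str]) -> str | None:
    """Return the guessed password if any wordlist entry matches the hash."""
    for word in wordlist:
        if sha256(word) == target_hash:
            return word
    return None

print(dictionary_attack(stored_hash, WORDLIST))  # -> "sunshine"
```

A weak, common password falls to this kind of brute guessing in seconds, which is exactly why sites rate-limit login attempts and put a bot check in front of them.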

To get around this myriad of issues, verification is required to differentiate between a legitimate user and a bot. This is where CAPTCHAs come into the picture.

The birth of CAPTCHAs

CAPTCHA, short for "Completely Automated Public Turing test to tell Computers and Humans Apart", was developed by scientists and professors at Carnegie Mellon University and IBM in 2000. It was a way to filter unwanted bots out of websites through the use of distorted images, puzzles, audio transcription, and the like. PayPal used this method to curb credit card fraud.
The premise of this method is that programs find it difficult to decipher distorted images, while humans can decode them easily. At one point, this CAPTCHA method was being used by 200 million users every day, which amounted to approximately 500,000 hours spent transcribing the encoded text. The CMU experts decided to turn all this effort into something useful and repurposed this bot-detection method to digitize classic books.
This new method was called "reCAPTCHA" and used snippets from PDF files, books, and other scanned material as the distorted text for the user to transcribe, solving two problems at once: keeping out bots and digitizing classic books.
This spin-off of CAPTCHA technology was acquired by Google in 2009, and the company has been developing it ever since.

On April 14, 2014, Google published a scientific paper claiming that it had developed image recognition systems using deep convolutional neural networks that could transcribe numbers and text from its Street View images. This meant that programs were now able to solve the most difficult CAPTCHAs with 99.8% accuracy, rendering the existing system unreliable.
Still, the bot problem remained, and a new way to keep bots out was needed. Enter No CAPTCHA reCAPTCHA.

No CAPTCHA reCAPTCHA

On December 14, 2014, Google announced that it had developed a new version of reCAPTCHA, which is quite ubiquitous today, the "I'm not a robot" click box.


This version does not make the user transcribe distorted text; instead, it determines with just one click whether you are a human or a bot. The method uses the Advanced Risk Analysis backend for reCAPTCHA that Google developed and described in a blog post in 2013.


This backend process analyzes the user's engagement before, during and after interacting with the CAPTCHA, and uses those signals to decide whether the user is a bot or a human. The "I'm not a robot" test uses similar methods, taking the way the user moves the cursor and the pattern in which they fill in form fields as some of the signals. Google does not publish the full list of signals, as doing so would obviously defeat the purpose of restricting bots.
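Google has never disclosed how these signals are weighed, but conceptually the idea resembles a risk score built from behavioural features. The sketch below is purely illustrative: the signal names, weights and threshold are invented for this example and are in no way Google's actual model.

```python
# Purely illustrative risk-scoring sketch; signal names, weights, and the
# threshold are invented and do not represent Google's actual algorithm.
from dataclasses import dataclass

@dataclass
class InteractionSignals:
    mouse_path_entropy: float   # humans move the cursor in irregular curves
    time_to_click_ms: int       # bots often click almost instantly
    typing_cadence_var: float   # variance in keystroke timing; ~0 for scripted input

def risk_score(s: InteractionSignals) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if s.mouse_path_entropy < 0.2:
        score += 0.4              # suspiciously straight cursor path
    if s.time_to_click_ms < 100:
        score += 0.4              # clicked faster than a human plausibly could
    if s.typing_cadence_var == 0.0:
        score += 0.2              # perfectly uniform keystrokes
    return score

def needs_image_challenge(s: InteractionSignals, threshold: float = 0.5) -> bool:
    """If the risk score is high, fall back to an image-selection challenge."""
    return risk_score(s) >= threshold

print(needs_image_challenge(InteractionSignals(0.05, 40, 0.0)))    # True: bot-like
print(needs_image_challenge(InteractionSignals(0.80, 900, 35.0)))  # False: human-like
```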

However, the CAPTCHA challenge has not been fully replaced and is still shown alongside the checkbox whenever Google suspects a malicious presence, acting as an additional signal for determining user validity. The distorted texts, though, have been replaced by images of, say, a cat, which the user must identify among other options.
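On the website's side, the checkbox produces a response token that the server must confirm with Google. The snippet below is a minimal sketch of that server-side check against reCAPTCHA's siteverify endpoint; the secret key and the token variable are placeholders you would replace with your own values.

```python
# Minimal sketch of verifying a reCAPTCHA response token on the server side.
# "YOUR_SECRET_KEY" and the token received from the browser are placeholders.
import json
import urllib.parse
import urllib.request

def verify_recaptcha(secret_key: str, response_token: str) -> bool:
    """Ask Google's siteverify endpoint whether the token is valid."""
    data = urllib.parse.urlencode({
        "secret": secret_key,        # the site's private reCAPTCHA key
        "response": response_token,  # token posted by the "I'm not a robot" widget
    }).encode("utf-8")
    with urllib.request.urlopen(
        "https://www.google.com/recaptcha/api/siteverify", data=data
    ) as resp:
        result = json.load(resp)
    return bool(result.get("success"))

# Example usage inside a form handler (placeholder values):
# if not verify_recaptcha("YOUR_SECRET_KEY", form["g-recaptcha-response"]):
#     reject_submission()
```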


Are “I’m not a robot” checks effective?

Google claims that following the release of the new version of reCAPTCHA, companies like Snapchat, WordPress and Humble Bundle adopted it quickly. It says that in the first week of using No CAPTCHA reCAPTCHA, users got through to the main website much faster than with the previous methods.

As for security, layering many signals makes it much harder for bots to get into a site, which clearly works in favour of the "I'm not a robot" method compared to the single text transcription of previous CAPTCHA methods. Because Google doesn't release the signals, bot makers are left guessing what they might be, which helps ensure that reCAPTCHA keeps the upper hand.
This method is also a boon for the visually impaired, as it replaces time-consuming transcription with a single click and the occasional need to label images. "No CAPTCHA reCAPTCHA" could see further development in the future as more signals are added to the algorithm to verify user legitimacy.

It's safe to say that the bot problem won't go away anytime soon, but for now, it looks like humans have an advantage in the digital arms race against them!

An article by Munna Suprathik