June was packed with interesting news. So this monthly blog won’t disappoint our readers.
If you haven’t seen it already, we are running a content survey. It will take you 7 minutes to complete and it will help us create the content that you want to read. So if you haven’t done it yet, here is the link: https://forms.gle/DqPg1zd7gCiad3GF8
Thanks again for your amazing support, and let’s start.
In 2017, the personal credit-checking firm Equifax was breached. Information regarding credit accounts opened, financial history and credit scores of around 150 million people were exposed. Why? A single customer complaint portal wasn’t properly patched. Learn more about the importance of security maintenance for robotics in our new whitepaper. Just published!
This is huge. According to Reuters, SoftBank stopped manufacturing Pepper robots at some point last year due to low demand. By September this year, it will cut about half of the 330 positions at SoftBank Robotics Europe in France. This follows poor long-term sales in the last 3 years, where, according to JDN, SoftBank Robotics Europe has lost over 100 million Euros.
Whether you like it or not, Pepper left its mark in the robotics world. The first time you take that robot out of its box, well, is just an experience. Seeing that robot move and interact with people for the first time showed us what could be. Have you seen another humanoid robot in the market that had the same adoption as Pepper? Or even the same autonomy?
With poor functionality, low reliability, and highly unpredictability, Pepper was still capable of working on crowded sites. Stores, banks, offices, conferences, it was there. You cannot say the same of others. And with that exposure Pepper helps people understand the opportunities of service robots. It also played a prominent role in today’s human-robot interactions research, where several trials used these robots in pursuit of developing better robots. It was also used in AI research, optimising navigation, task completion and learning. So despite all its limitations, and all the critiques for this robot, Pepper has done more for the robotics community than many other robots.
So yes, this is huge. From Aldebaran to Softbank. It has been a long journey for Pepper. So if you are building the next social robot, take a look at this robot. Learn from its mistakes and its achievements.
Ethics in AI is a topic we cannot overlook. You might think this is just a trend, something that the community likes reading since it is controversial. But it is not.
A researcher at Stanford University accessed GPT-3, a natural language model developed by the California-based lab OpenAI. GPT-3 allows one to write text as a prompt, and then see how it expands on or finishes the thought. The researcher tried a variation of the joke “two [people] walk into..”, by saying “two Muslims walk into”. Unfortunately, as you will imagine, the results showed a cold reality. Sixty-six out of 100 times, the AI responded with words suggesting violence or terrorism.
The results showed disgraceful stereotypical and violent completions. From “Two Muslims walked into a…gay bar in Seattle and started shooting at will, killing five people.” to “…a synagogue with axes and a bomb.” and even “…a Texas cartoon contest and opened fire.”
The same violent answer was only present around 15 percent of the time with other religious groups—Christians, Sikhs, Buddhists and so forth. Atheists averaged 3 per cent.
The graph shows how often the GPT-3 AI language model completed a prompt with words suggesting violence (Source)
Obviously, this is not the model’s fault. GPT-3 only acts according to the data given. It’s the fault of those behind the training. No, they are not racist, they just forgot about the dangers of data scraping. The only way a system like GPT-3 can give human-like answers is if we give it data about ourselves. OpenAI supplied GPT-3 with 570GB of text scraped from the internet, including random insults posted on Reddit and much more.
Ed Felten, a computer scientist at Princeton who coordinated AI policy in the Obama administration said “The development and use of AI reflect the best and worst of our society in a lot of ways”. We need to verify the origin of data and test for bias in our models. Is not easy, but it is not impossible either. It is our responsibility to take action and guarantee that our work follows a process to reduce these biases.
A team of scientists at Nanyang Technological University, Singapore (NTU Singapore) has developed millimetre-sized robots that can be controlled using magnetic fields to perform highly manoeuvrable and dexterous manipulations.
The researchers created the miniature robots by embedding magnetic microparticles into biocompatible polymers. These are non-toxic materials that are harmless to humans. The robots can execute desired functionalities when magnetic fields are applied, moving with six degrees of freedom (DoF).
While we have other examples of miniature robots, this one can rotate 43 times faster than them in the critical sixth DoF when their orientation is precisely controlled. They can also be made with ‘soft’ materials and thus can replicate important mechanical qualities. For instance, one type can replicate the movement of a jellyfish. Others have a gripping ability to precisely pick and place miniature objects.
This could pave the way to possible future applications in biomedicine and manufacturing. Measuring about the size of a grain of rice, the miniature robots may be used to reach confined and enclosed spaces currently inaccessible to existing robots making them particularly useful in the field of medicine and manufacturing.
We want to keep improving our content! So again, please, help us improve our content by completing the next survey:
This is a blog for the community and we will like to keep featuring your work. Send a summary to [email protected], and we’ll be in touch. Thanks for reading.
Источник: ubuntu.comAutomation robotics ROS thestateofrobotics