The eyebrow-raising claim from Microsoft—which is banking on GPT putting it ahead of Google—contrasts with the model's clear limitations.
Microsoft is betting heavily on integrating OpenAI's GPT language models into its products to compete with Google, and, the company now claims, its AI is an early form of artificial general intelligence (AGI). On Wednesday, Microsoft researchers released a paper on the arXiv preprint server titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” They declared that GPT-4 showed early signs of AGI, meaning that it has capabilities that are at or above human level.
This eyebrow-raising conclusion largely contradicts what OpenAI CEO Sam Altman has been saying about GPT-4. For example, he has said the model is “still flawed, still limited.” In fact, if you read the paper itself, the researchers appear to dial back their own splashy claim: the bulk of the paper is devoted to cataloguing the many limitations and biases the large language model contains.
This raises the question of how close to AGI GPT-4 really is, and whether “AGI” is simply being used as clickbait.
“We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” the researchers write in the paper’s abstract.
“Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

Indeed, the researchers show examples of GPT-4’s capabilities in the paper: it is able to write a proof that there are infinitely many primes, with rhymes on every line, and to draw a unicorn in TikZ, a LaTeX graphics language. This is all quickly followed by some serious caveats.
While in the abstract of the paper the researchers write that “GPT-4’s performance is strikingly close to human-level performance,” their introduction immediately contradicts that initial attention-grabbing statement.
They write, “Our claim that GPT-4 represents progress towards AGI does not mean that it is perfect at what it does, or that it comes close to being able to do anything that a human can do (which is one of the usual definition [sic] of AGI; see the conclusion section for more on this), or that it has inner motivation and goals (another key aspect in some definitions of AGI).”

The researchers said that they used a 1994 definition of AGI by a group of psychologists as the framework for their research. They wrote, “The consensus group defined intelligence as a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.
This definition implies that intelligence is not limited to a specific domain or task, but rather encompasses a broad range of cognitive skills and abilities.”

“OpenAI’s powerful GPT-4 model challenges many widely held assumptions about the nature of machine intelligence.
Through critical evaluation of the system’s capabilities and limitations, which you can read about in ‘Sparks of Artificial General Intelligence: Early experiments with GPT-4,’ Microsoft researchers observed fundamental leaps in GPT-4’s abilities to reason, plan, solve problems, and synthesize complex ideas that signal a paradigm shift in the field of computer science,” a Microsoft spokesperson said.
“We recognize the current limitations of GPT-4 and that there is still work to be done. We will continue to engage the broader scientific community in exploring future research directions, including those required to address the societal and ethical implications of these increasingly intelligent systems.”
OpenAI CEO Sam Altman emphasized the limitations of GPT-4 when it was released, saying “it is still flawed, still limited, and it still seems more impressive on first use than it does when you spend more time with it.” In a Thursday interview with Intelligencer’s Kara Swisher, Altman shared the same disclaimers: “There’s plenty of things it’s still bad at.” In the interview, Altman agreed that the bot will sometimes make things up and present users with misinformation, and said the model still needs a lot more human feedback to become more reliable.
Altman and OpenAI have always looked toward a future where AGI exists, and have recently been engaged in building hype around the firm's ability to bring it about. But Altman has also been clear that GPT-4 is not AGI.
“The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,” Altman said just before GPT-4’s release. “People are begging to be disappointed and they will be. The hype is just like... We don’t have an actual AGI and that’s sort of what’s expected of us.”

“Microsoft is not focused on trying to achieve AGI. Our development of AI is centered on amplifying, augmenting, and assisting human productivity and capability. We are creating platforms and tools that, rather than acting as a substitute for human effort, can help humans with cognitive work,” a Microsoft spokesperson clarified in a statement to Motherboard.
The Microsoft researchers write that the model has trouble with confidence calibration, long-term memory, personalization, planning and conceptual leaps, transparency, interpretability and consistency, cognitive fallacies and irrationality, and challenges with sensitivity to inputs.
What all this means is that the model has trouble knowing when it is confident and when it is just guessing; it makes up facts that are not in its training data; its context is limited and there is no obvious way to teach it new facts; it cannot personalize its responses to a particular user; it cannot make conceptual leaps; it has no way to verify whether content is consistent with its training data; it inherits the biases, prejudices, and errors of that training data; and it is very sensitive to the framing and wording of prompts.
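To make the first of those problems concrete: calibration is usually quantified by comparing a model's stated confidence with its actual accuracy. Below is a minimal, hypothetical sketch of one standard metric, expected calibration error; the confidence and correctness values are invented stand-ins, not GPT-4 outputs.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over confidence bins,
    weighted by the fraction of predictions landing in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Hypothetical data: a well-calibrated model would score near 0.
conf = [0.9, 0.8, 0.95, 0.6, 0.99]   # the model's stated confidence per answer
hit  = [1,   0,   1,    1,   0]      # whether each answer was actually right
print(f"ECE: {expected_calibration_error(conf, hit):.3f}")
```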
GPT-4 is the model that Bing’s chatbot was built on, which gives us a real-life look at how these limitations play out. The chatbot made several mistakes during Microsoft’s public demo, making up information about a pet vacuum and about Gap’s financial data.
When users chatted with the chatbot, it would often go off the rails, for instance replying “I am. I am not. I am. I am not.” more than fifty times in a row when someone asked it, “Do you think that you are sentient?” Though the current version of GPT-4 has been fine-tuned on user interactions since the Bing chatbot’s initial release, researchers found that GPT-4 spreads more misinformation than its predecessor, GPT-3.5.
Notably, the researchers “do not have access to the full details of its vast training data,” meaning their conclusions rest on testing the model against standard benchmarks that were not designed specifically for GPT-4, without being able to rule out that the model saw those benchmarks during training.
“The standard approach in machine learning is to evaluate the system on a set of standard benchmark datasets, ensuring that they are independent of the training data and that they cover a range of tasks and domains,” the researchers wrote.
“We have to assume that it has potentially seen every existing benchmark, or at least some similar data.”

Many AI researchers have criticized the secrecy OpenAI maintains around the training datasets and code behind its AI models, saying it makes it impossible to evaluate the models’ harms and to come up with ways of mitigating their risks.
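When researchers do have corpus access, one common (if imperfect) way to probe for this kind of contamination is to check for long n-gram overlap between benchmark items and training text. A minimal sketch, with invented strings standing in for real corpora:

```python
def ngrams(text, n=8):
    """Set of word-level n-grams, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(benchmark_item, training_docs, n=8):
    """Flag a benchmark item if any training document shares an n-gram with it."""
    item_grams = ngrams(benchmark_item, n)
    return any(item_grams & ngrams(doc, n) for doc in training_docs)

# Hypothetical corpora for illustration only.
train = ["the quick brown fox jumps over the lazy dog near the river bank today"]
item = "which animal jumps over the lazy dog near the river bank today in the story"
print(looks_contaminated(item, train))  # True: an 8-gram is shared
```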
With all this being said, it is clear that the “sparks” the researchers claim to have found are largely overshadowed by the limitations and biases the model has displayed since its release.
Recommended: The fourth industrial revolution has begun: Now’s the time to... (technologyreview.com)

2020 has created more than a brave new world. It’s a world of opportunity that is pressuring organizations of all sizes to rapidly adopt technology not just to survive, but to thrive. And Andrew Dugan, chief technology officer at Lumen Technologies, sees proof in the company’s own customer base, where “those organizations that fared the best throughout covid were the ones that were prepared with their digital transformation.” And that’s been a common story this year. A 2018 McKinsey survey showed that, well before the pandemic, 92% of company leaders believed “their business model would not remain economically viable through digitization.” This astounding statistic shows why organizations need to start deploying new technologies, not just for the coming year, but for the coming fourth industrial revolution.
This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.

Lumen plans to play a key role in this preparation and execution:
“We see the fourth industrial revolution really transforming daily life ... And it’s really driven by that availability and ubiquity of those smart devices.” With the rapid evolution of smaller chips and devices, acquiring, analyzing, and acting on data becomes a critical priority for every company.
But organizations must be prepared for this increasing onslaught of data.
As Dugan says, “One of the key things that we see with the fourth industrial revolution is that enterprises are taking advantage of the data that’s available out there.” And to do that, companies need to do business in a new way. Specifically, “One is change the way that they address hiring. You need a new skill set, you need data scientists, your world is going to be more driven by software. You’re going to have to take advantage of new technologies.” This mandate means that organizations will also need to prepare their technology systems, and that’s where Lumen helps “build the organizational competencies and provide them the infrastructure, whether that’s network, edge compute, data analytics tools,” continues Dugan.
The goal is to use software to gain insights, which will improve business.

When it comes to next-generation apps and devices, edge compute—the ability to process data in real time at the edge of a network (think a handheld device) without sending it back to the cloud to be processed—has to be the focus. Dugan explains:
“When a robot senses something and sends that sensor data back to the application, which may be on-site, it may be in some edge compute location, the speed at which that data can be collected, transported to the application, analyzed, and a response generated, directly affects the speed at which that device can operate.” This data must be analyzed and acted on in real time to be useful to the organization. Think about it, continued Dugan: “When you’re controlling something like an energy grid, similar thing. You want to be able to detect something and react to it in near real time.”
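To put rough numbers on that intuition, the sketch below walks through a hypothetical latency budget for one sense-analyze-respond loop. Every figure is an assumption chosen for illustration, and only the transport leg changes between a nearby edge site and a distant cloud region.

```python
# Hypothetical latency budget (milliseconds) for one control-loop round trip.
sense, analyze, actuate = 2.0, 5.0, 2.0   # fixed costs at the device and app
transport_edge = 2 * 3.0                   # assume ~3 ms each way to a metro edge site
transport_cloud = 2 * 40.0                 # assume ~40 ms each way to a distant region

edge_total = sense + transport_edge + analyze + actuate
cloud_total = sense + transport_cloud + analyze + actuate

print(f"edge loop:  {edge_total:.0f} ms -> ~{1000 / edge_total:.0f} decisions/s")
print(f"cloud loop: {cloud_total:.0f} ms -> ~{1000 / cloud_total:.0f} decisions/s")
```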
Edge compute is the function that allows organizations to enter the fourth industrial revolution, and this is the new reality. “We’re moving from that hype stage into reality and making it available for our customers,” Dugan notes. “And that’s exciting when you see something become real like this.”

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next. This podcast episode was produced in partnership with Lumen Technologies.

Show notes and links
“Emerging Technologies And The Lumen Platform,” by Andrew Dugan, Automation.com, September 14, 2020
“The Fourth Industrial Revolution: what it means, how to respond,” by Klaus Schwab, The World Economic Forum, January 14, 2016
“Why digital strategies fail,” by Jacques Bughin, Tanguy Catlin, Martin Hirt, and Paul Willmott, McKinsey Quarterly, January 25, 2018

Full transcript
Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is building a connected platform for the fourth industrial revolution, which, granted, is a concept that is still being refined in practice, but is undoubtedly here, as data, artificial intelligence, network performance, and devices come together to better serve humans. Two words for you: next-generation apps.

My guest is Andrew Dugan, who is the chief technology officer for Lumen. He has more than 30 years of experience in the telecommunications industry and, unsurprisingly for his time as an engineer, more than 20 patents filed. Andrew, welcome to Business Lab.
Andrew Dugan: Thanks, Laurel. I’m very happy to be here.

Laurel: So, launching a new company during a pandemic may not be the most ideal situation, but a great opportunity to rise to the occasion. How has the covid-19 pandemic helped Lumen prepare for, perhaps unexpected, customer needs?

Andrew: Well, covid has been difficult. It’s certainly had a terrible impact on the world, but one of the positive parts of it is that I’ve been really pleasantly surprised at how our team has responded and how our customers have responded. And covid gave us a really good opportunity to show how our infrastructure and our services are scalable by being able to turn up emergency bandwidth for our customers in record time, surprisingly quick. Covid has also driven a measurable increase in our customers’ understanding of how important digital capabilities are, because those organizations that fared the best throughout covid were the ones that were prepared with their digital transformation.
We’ve watched how our customers’ needs have changed throughout covid. Early on, we did surveys and found the early concerns were around supply chain. “Will I be able to get the things that I need to be able to continue to run my business? Will I be able to keep my employees safe?” And we’ve seen a shift towards more of the digital concerns. “Is my new way of operating secure? Do I have the right type of security measures in place? Do I have the right type of network for my remote employees or maybe for my customers to be able to consume my services?
” A lot of businesses are looking forward and saying, “How do I create new forms of revenue in this covid world?” And so they’re looking at technology to help them with that. And we’re finding that the services that we have available at Lumen can really help them with that need. So, it’s been a difficult time, but also one that's exciting from a technology perspective.
Laurel: It has, hasn’t it? We interviewed the CIO at Boston Children’s Hospital, and he said that in the early days of covid, telehealth visits skyrocketed from 20 visits a day to 2,000. Obviously, there’s been a bit of a decrease as patients returned to in-person visits, but clearly this is a huge disruption to the way that things were done. What opportunities during this time of great global disruption do you think could actually be accelerated?
Andrew: As I mentioned, I think businesses have really recognized the power of digital capabilities in today’s world. And I think covid has helped accelerate a lot of businesses in that digital transformation. The longer-term cultural changes that I think will result here, those usually take generations to occur. And when you’re forced into an environment like covid has put us into, it can help accelerate some of those changes. Whether it’s more work from home, the way that health care is provided through more virtual and online services, the way that people market and sell their services. Who would have thought that the number of home sales or cars that were sold through virtual visits would be a normal way of doing things? Also, the way that people interact. From my own personal experience, I’ve done more social interaction through game nights online. I even did an online wine tasting myself with my family and it was quite fun. So, I think we will see continued evolution of products and services, new revenue streams for companies as they embrace the possibilities of what technology can bring to them.
Laurel: Do you have any examples of what you’re hearing from your customers? Just kind of those, “Oh, we didn't know we could do X, but now we can and maybe it’ll work out.” Just those off-handed conversations that sometimes you have.
Andrew: Well, I think a lot of our customers were surprised at how quickly they were able to transform to a remote work environment. So, they were able to move the majority of their workforces home with little or no disruption to their business. We certainly found that in our business.
So one thing that was surprising for our customers was the usefulness of online learning. I’m not sure that many people before this would have expected that we could support this level of online learning or online healthcare. Those sorts of things, many people did find surprising: how quickly and how ready the technology was to support them.
Laurel: Yeah, to be able to do that, whether it’s education or telehealth, a complex and fast edge network needs to be built in most places, right? And expanded in others. So when you think of these complexities, how do companies best handle their plans for not just the edge, but also growing data infrastructure that's needed to support all of these services?
Andrew: One of the key things that we see with the fourth industrial revolution is that enterprises are taking advantage of the data that's available out there. There’s a lot more data being generated through things like IoT and smart devices, and the way that enterprises, I think, get to take advantage of those is they are going to have to do a couple things. One is change the way that they address hiring. You need a new skill set, you need data scientists, your world is going to be more driven by software. You’re going to have to take advantage of new technologies. Edge compute is one of those that’s emerging and becoming more available. And they're going to have to learn how to build that into their applications and their processes. And they're going to have to look at how the data can make them more efficient, what sort of new revenue streams they can create. So, those are going to be challenges that they may not have faced before. They may not have had to learn how to use AI and machine learning tools. But I think that those will become more critical as the fourth industrial revolution develops for enterprises to be successful.
Laurel: And that’s one of those things where if the old saying is true, that if every company is a technology company, then the technology demands today have advanced pretty greatly, pretty quickly, especially in the face of covid, but in general as devices get smaller and faster and edge compute becomes more real.
Andrew: Yeah, I think that statement is really true, that every company is a technology company. I’ve got a family member that owns a hair salon business, and you wouldn’t think that that’s a technology company, but when it comes to how you interact with your customers, you need to have a digital presence. You need to have digital tools that may be less data-driven today, but over time will become more data-driven. So, I think you’re absolutely right that almost all businesses are becoming technology businesses to some extent.
Laurel: Especially with AI and ML [machine learning]. You add this all together with edge compute, AI, better devices, faster devices [and you have something new]. So, the World Economic Forum says the fourth industrial revolution isn’t just accelerating but exponentially advancing technological breakthroughs. How specifically does Lumen, or do you, define the fourth industrial revolution?
Andrew: We see the fourth industrial revolution really transforming daily life, not just people’s personal life, but organizations, as we talked about enterprises are becoming technology companies. And it’s really driven by that availability and ubiquity of those smart devices. Those smart devices are generating data, and enterprises and businesses, their ability to be successful is really being driven by their ability to acquire, analyze, and act on the data coming from those smart devices, to be able to improve their products and services, improve their outcomes as a business and differentiate themselves from competitors. And for us at Lumen, it’s about how do we enable those businesses to use that data and help them build the organizational competencies and provide them the infrastructure, whether that’s network, edge compute, data analytics tools, to help them implement insights using software to improve their business.
Laurel: So, thinking about that acquire, analyze, act on the data, what are some of those challenges that enterprises have with data and processing it?
Andrew: One of the biggest challenges as this transformation occurs, centered as it is around data, really does come back to that skill set. If your business is being driven by the data, you have to have the people that are able to understand that data and extract value from it. And that’s data science, and more businesses are going to require data scientists, that skill set, to be able to acquire, analyze, and figure out how to act on that data. That’s going to be driven by software, so I think there will be an increasing need for those software skill sets. Those are certainly challenges that they’re going to face. They’re also going to face technology challenges. How do you deal with the new architectures that are going to be required, whether that’s edge compute or more of the AI machine-learning technologies, to be able to deal with all of that data and extract that value? And then how does that affect their processes? A lot of times their processes today aren’t built around data. Those processes can be too slow. Data provides them a real opportunity to improve that efficiency, improve the speed, give them more of an ability to make real-time decisions as they automate the analysis of that data. So, having skills for things like robotic process automation across the organization to help take advantage of that, I think, is going to be important, too. So, improving their people’s skill set, how they take advantage of technology, and how that affects their process are all going to be challenges that they have to deal with.
Laurel: That’s an excellent point. It’s not just one thing, is it? You really do have to improve the entire system down the line. The focus for some companies may be hiring, and for other companies it may be apps, solutions, and deployment, because they have the infrastructure already built. As we know, the data has come out, and the companies that have done better during this time are the ones that had already started or were in the process of their digital transformation. So what are some of the characteristics you see in forward-looking companies, or companies that have started their digital transformation or are in the process of it? What kinds of technologies and thinking are they using and deploying?
Andrew: Yeah, I think that varies by industry. We talk to a lot of larger enterprises. People who are building smart factories as an example, and they’re dealing with, how do they make better use of robotics? How do they build that infrastructure? How do they run that infrastructure? How do they make it more secure? We see other enterprises out there that are looking to collect information about how their services are used, what their customers want to do with it and collecting that data and trying to figure out how to use AI and machine learning to better predict what their customers will need. So, it really varies by industry, but it’s the software tool sets that are out there to help them solve their business problems through data, but also the infrastructure that they’re going to need to be able to run things like smart factories with robots that are connected through wireless technologies. Feeding data back through sensors to their applications, which may not be located on-site. How do you run and operate those applications? How do you connect it all together and make it work seamlessly? Those are some of the things we’re seeing.
Laurel: And it’s a very complex issue for sure. So, speaking of robots, there’s always this discussion about automation and the work that robots can do instead of people, specifically those “tedious tasks,” freeing humans to do more creative work. What kinds of opportunities do you see with robotics and automation?
Andrew: Oh, I see quite a bit. That’s a way for businesses to become more efficient, produce a better quality product, have a safer environment. Going back to that smart factory example, we’re talking with customers who are trying to figure out, how do they take advantage of the advancements in robotics and how do they build out the infrastructure? One of the things that we found is that customers need help with deploying and managing those applications. They need help with the connectivity of those robots to the network. They need to ensure that the infrastructure that’s supporting them can support the real-time processing. That’s so important in these robotics applications. And looking for somebody who can help them design these solutions end-to-end, from their enterprise locations where the factory is, through the edge, to the centralized cloud, is something that we’re in a good position to help them with, and it has been a more recurring conversation as those enterprises try to figure out how to take advantage of the automation that robotics provides.
Laurel: Yeah, speaking of that competitive advantage, where are you seeing it? Smart factories and those edge devices? Are there any unexpected places that you’re starting to see that advantage come through?
Andrew: Yes. There are. There are some things that I think are less obvious. One of our customers is a retail food chain, and you wouldn’t think that these technologies and the applications, the processing of data, would be as important as it is. When you drive up to a restaurant, you want to go through the drive-through and get something. And you see the line wrapping around the building. There are certain restaurants where you look at that and you say, “Oh, that line is going to take me too long.” But there are other restaurants where you look at it and you say, “Yeah, that line does wrap around the building, but I know from my experience that I can get through that line in just a few minutes.” The fact that those restaurants run an efficient line like that, it’s not by accident, and it’s not necessarily just hard work from the employees, although they do work hard. It’s because the applications that they’re using have created a more efficient operation, whether that’s automation of the food preparation inside, how they collect the orders from customers, how they process the orders, the process that allows them to operate as a business. So, it is affecting every part of the business, even those that you wouldn’t think are highly dependent upon data or applications, like a retail food establishment. Their business success is becoming increasingly dependent on the things that are enabled by the fourth industrial revolution.
Laurel: That’s really interesting because when you think about just that one example, there are so many edges there, right? And that doesn’t even go into supply chain and efficiency across the entire retail chain, across a certain geographic area. When we think about this kind of real-time response rate, yes we have this example in a retail food chain, but why is it so important? Why is real-time processing that key component to the fourth industrial revolution?
Andrew: I think there are a couple of reasons why. One is that data in many cases has a very short useful life. And whether it’s that robotics example or other examples like smart energy grids, you’ve got sensors out there. Those sensors are collecting information. The applications that are being written to react to those sensors are being written for real-time response. Going back to the robotics example: when a robot senses something and sends that sensor data back to the application, which may be on-site, it may be in some edge compute location, the speed at which that data can be collected, transported to the application, analyzed, and a response generated directly affects the speed at which that device can operate. And so the ability to manage that data, to process that data in real time, is critical for those types of applications. When you’re controlling something like an energy grid, similar thing. You want to be able to detect something and react to it in near real time. There are other examples in safety, where you’ve got video processing managing the movement of something around a campus. The ability to have the camera see something, detect it, and react to it is critical for safety. So we’re seeing a lot of applications for which fast processing of data is becoming very important.
Another reason for real time is that the amount of data being generated out there is just huge. That data is moving quickly, and you don’t necessarily have to store it over a long period of time. As that data is coming in, you want to be able to process it as quickly as you can, extract whatever value you can out of it, and then dispose of it. You don’t want to get behind in that processing, so the ability to handle it in real time is also important.
Laurel: Yeah. Focusing on that sense, detect, and react: that of course has a lot to do with security as well. The attack surface of what enterprises are looking at now is growing, right? It’s every device, every network connection, every point. How is security tackled, and how is this a priority for businesses?
Andrew: Yeah, this is a really interesting problem, I think. Years ago, an enterprise would build a private network and protect it largely with perimeter-based security. You make sure that the data or people getting into that network are the people and data that you want there. And you could protect a lot using a perimeter model like that. As applications distribute, as they become available on the public internet, that perimeter-based security is not the only thing you can rely on. You have to think about security at every layer. And the layers I think you have to worry about today are your network, operating system, application security, and your data security.

From a network perspective, you want to ensure that you’re operating on a network that is inherently secure. One of the things that we do at Lumen to help with that is we have a group that we call Black Lotus Labs. It’s a research group inside the company, and their job is to analyze data available through the internet, analyzing internet traffic patterns and detecting malicious actors out there, and then build that protection into our networking and enterprise security products. By doing that, we can make the network inherently more secure. At the operating system level and application level, you need to make sure that you’re continually patching, that you understand what exposures might exist in the operating system that’s running your applications and in the applications themselves, and that you’re continuing to close any gaps that are found. And as data becomes more available, as we’re extracting more and more valuable information about our customers and users using that data analytics, data privacy and security are becoming even more important. And so, use of data encryption where appropriate, ensuring that you have the right data security and controls in place, is also critically important. So yeah, we’ve changed quite a bit from a perimeter model to one where you need to think about security at every layer of the network and every layer of your application.
Laurel: And that makes sense as everything becomes much more integrated and like you said, the data at every layer demands that sort of response. So when I’m thinking about customers, that’s a broad category. And Lumen obviously is a bit behind the scenes to their customers’ customers, but still very important. You need to care about how everyone is using the network devices. And how do you instill that curiosity into your organization where you look out and you are responsible for the experiences of many different people and many different applications. And it’s hard to, I guess, sometimes square what a smart factory does with a food retail outlet, but at the same time, you’re still reliably giving them that network connectivity securely, quickly to allow them to do what they need to do.
Andrew: Well, I think you hit on it there. Even though it’s our customers’ customers that have a lot of the experience that we’re trying to drive, we really do have a direct effect on that. As you outlined, it’s the network experience. We provide a lot of the underlying infrastructure and the performance of our network directly affects those end customers’ experience. So, that’s really important. How secure we make our network, how secure we make our infrastructure also directly affects those end customers. So, we try to instill in our employees, in our products and services, that recognition that we are here to create a great customer experience for our customers and indirectly to their customers. And I think we do a good job of that. I think everybody recognizes how critical the services are that we perform and provide and that our customers rely on us.
Laurel: Absolutely. So one last question, as an engineer yourself, we’ve touched on so many different aspects and we could easily talk for days about certain parts of this conversation, especially security, but what are you most excited about or curious and what gets you just really happy to read the news, to get going, to do the hard work that really helps companies do those amazing things?
Andrew: Well, I get excited about technology, being an engineer. There’s so much that we can help our customers do to improve their businesses, but also to improve society overall. I look at technology as a real tool that we can make available to our customers to make things better. And it’s really fun for me to be involved in the development of the technologies that empower them to take advantage of this fourth industrial revolution. One of the things that gets me up on a daily basis recently is the developments around edge and edge compute, and supporting these applications that are becoming more performance-sensitive. How do we build and manage the infrastructure that lets those applications operate with a high degree of performance, so that they can provide that real-time feedback to our customers and real-time improvement?
So, it’s pretty exciting that the edge compute part of what we’re building is relatively new. The conversation’s been around in the industry for a couple of years, but it’s now becoming real and we’re moving from that hype stage into reality and making it available for our customers. And that’s exciting when you see something become real like this.
Laurel: It is. Anything to get away from the hype and into the reality. Andrew, thank you so much for joining me today in what has been just a fantastic conversation on the Business Lab.
Andrew: Thank you very much. Enjoyed it.
Laurel: That was Andrew Dugan, who is the chief technology officer for Lumen, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the Director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web and at dozens of events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.
Francisco Gimeno - BC Analyst: What we call the 4th IR is the acceleration of the development of new technologies around the blockchain and their impact all over the world. Data is the new gold, and old financial, social, and even ethical paradigms are being disrupted and changed. There is a lot of hype yet, but this is a period of change.
Imagine if your manager could know whether you actually paid attention in your last Zoom meeting. Or, imagine if you could prepare your next presentation using only your thoughts.
These scenarios might soon become a reality thanks to the development of brain-computer interfaces (BCIs).
To put it in the simplest terms, think of a BCI as a bridge between your brain and an external device.
As of today, we mostly rely on electroencephalography (EEG) — a collection of methods for monitoring the electrical activity of the brain — to do this. But, that’s changing.
By leveraging multiple sensors and complex algorithms, it’s now becoming possible to analyze brain signals and extract relevant brain patterns. Brain activity can then be recorded by a non-invasive device — no surgical intervention needed.
In fact, the majority of existing and mainstream BCIs are non-invasive, such as wearable headbands and earbuds.
The development of BCI technology was initially focused on helping paralyzed people control assistive devices using their thoughts. But new use cases are being identified all the time.
For example, BCIs can now be used as a neurofeedback training tool to improve cognitive performance. I expect to see a growing number of professionals leveraging BCI tools to improve their performance at work.
For example, your BCI could detect that your attention level is too low compared with the importance of a given meeting or task and trigger an alert. It could also adapt the lighting of your office based on how stressed you are, or prevent you from using your company car if drowsiness is detected.
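As a sketch of how such an alert might be derived (the frequency bands, threshold, and synthetic signal below are illustrative assumptions, not any vendor's actual method): one common heuristic treats the ratio of beta-band EEG power to alpha-plus-theta power as a rough engagement index.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(signal, fs, lo, hi):
    """Integrate the Welch power spectral density between lo and hi Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

def engagement_index(eeg, fs):
    """Beta / (alpha + theta) power ratio -- a rough attention proxy."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    return beta / (alpha + theta)

fs = 256                            # assumed sampling rate in Hz
eeg = np.random.randn(fs * 4)       # stand-in for four seconds of one EEG channel
if engagement_index(eeg, fs) < 0.3:  # threshold chosen arbitrarily for illustration
    print("Attention below threshold for this meeting -- alert triggered")
```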
A Toronto-based startup called “Muse” has developed a sensing headband that gives real-time information about what’s going on in your brain.
As you can imagine, the startup already has a “Corporate Wellness Program” to “help your employees lower stress, increase resilience, and improve their engagement.” Other headbands on the market also use proprietary sensors to detect brain signals and leverage machine learning algorithms to provide insights into the engagement levels of users/workers.
They can track whether someone is focused or distracted. Theoretically, this could help individuals in their day-to-day tasks, by evaluating which tasks should be tackled first based on your attention level. But, there’s also huge potential for abuse (more on this below).
This ability to monitor (and potentially control) attention levels creates new possibilities for managers. For example, companies could have access to a specific “BCI HR dashboard” in which all employees’ brain data would be displayed, in real-time.
Are we going to see supervisors monitoring the attention levels of their colleagues?
At the end of each annual performance review, are we also going to analyze and compare attention levels thanks to our BCIs? Your brain information may be of interest to your employers, allowing them to keep an eye on how focused you are and to adapt employees’ workloads accordingly.
Again, there is much potential for abuse.

I also expect more professional events to leverage BCIs in the near future. Indeed, research has shown that brain data can help predict which booths and activities people would visit.
In the future, are we going to need BCIs to participate in business events?

Beyond the analysis of brain signals, some companies are already working on solutions that can actually modulate your brain activity.
Researchers at Columbia University have shown how neurofeedback using an EEG-based BCI could be used to affect alertness and to improve subjects’ performance in a cognitively-demanding task.
Despite these promising results, some experts, such as Theodore Zanto, a director of the UCSF neuroscience program, say that while BCIs based on EEG scans can determine a user’s attention levels, they are as yet incapable of differentiating what the user is actually focused on.
In a January 2019 Medium article, he says, “I haven’t seen any data indicating you can dissociate if someone is paying attention to the teacher or their phone or just their own internal thoughts and daydreaming.” Moreover, I have realized through my own work that BCIs are also affected by users’ specific characteristics, such as gender, age, and lifestyle. Indeed, my team and I are trying to determine how brain activity can affect an athlete’s performance.
According to some research, “psychological factors including attention, memory load, fatigue, and competing cognitive processes, as well as users’ basic characteristics such as lifestyle, gender, and age, influence instantaneous brain dynamics.” Experts believe that around “15-30% of individuals are inherently not able to produce brain signals robust enough to operate a BCI.” Obviously, this can lead to inaccurate results and, ultimately, bad decisions by companies. BCIs still have a long way to go, and much improvement is needed.
Another use case for BCIs at work is related to the ways we interact with machines and devices. Indeed, I predict that in the future, the most “dangerous” jobs will require the use of BCIs.
For example, some BCI companies have already used EEG to analyze signals of drowsy driving. Companies with workers who operate dangerous machinery may require their workers to be monitored in the same ways. I believe that someday, it will be mandatory for pilots and surgeons to wear a BCI while working.
The idea of humans interacting with devices is a pillar of BCIs, as BCI technology provides direct communication between the brain and external devices.
In the next few years, we might be able to control our PowerPoint presentation or Excel files using only our brains. Some prototypes can translate brain activity into text or instructions for a computer, and in theory, as the technology improves, we’ll see people using BCIs to write memos or reports at work.
We could also imagine a work environment that adapts automatically to your stress level or thoughts: BCIs can detect a worker’s mental state and adjust nearby devices accordingly, much as in a smart home.
Concretely, when you are stressed, your headband could send information (over Bluetooth) to your computer so that it starts playing your “calm” playlist, your Slack could switch to “do not disturb” mode, or your next appointment could be automatically cancelled.
Obviously, this scenario raises questions about privacy. Would you feel comfortable knowing that others can know precisely how you feel mentally? What if this information could be used against you? What if this data could be modified by someone else without your approval?
Researchers are also experimenting with “passthoughts” as an alternative to passwords. Soon, we might log into our various devices and platforms using our thoughts. As described in this IEEE Spectrum article, “When we perform mental tasks like picturing a shape or singing a song in our heads, our brains generate unique neuronal electrical signals.
A billion people could mentally hum the same song and no two brain-wave patterns generated by that task would be alike.
An electroencephalograph (EEG) would read those brain waves using noninvasive electrodes that record the signals. The unique patterns can be used like a password or biometric identification.”
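A minimal sketch of that matching step, assuming enrollment has already produced an averaged feature vector per user; the features, noise level, and threshold are all illustrative assumptions:

```python
import numpy as np

def correlates(template, attempt, threshold=0.9):
    """Pearson correlation between the enrolled template and a new attempt."""
    r = np.corrcoef(template, attempt)[0, 1]
    return r >= threshold

# Hypothetical feature vectors (e.g., band powers per channel) from enrollment and login.
enrolled = np.array([0.42, 0.13, 0.77, 0.30, 0.55])
attempt = enrolled + np.random.normal(0, 0.02, size=5)  # same user, slight noise
print("authenticated" if correlates(enrolled, attempt) else "rejected")
```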
As you can imagine, there are myriad ethical questions and concerns surrounding the use of BCI technology in the workplace. Companies that opt to use BCI technology could face massive backlash from employees, not to mention from the public.
When it comes to collecting brain data, the potential for abuse is frightening: Even when used with the best of intentions, companies could risk becoming overly dependent on using brain data to evaluate, monitor, and train employees, and there are risks associated with that.
BCIs aren’t a perfect technology — there’s no telling what sorts of mistakes or mishaps we’ll encounter as companies and individuals begin to use these devices in the real world. What’s more, BCIs — like any technology — can be hacked. Hackers could access a BCI headband and create and send manipulated EEG data.
A hacker could also intercept and alter all data transmitted by your BCI. It’s possible that a hacker could steal your “passthoughts” user credentials and interact with your devices (laptop, car, etc.). These risks can directly impact our physical integrity.
Brain data could also be stolen to be used against you for extortion purposes. The potential for serious abuse is significant. When companies begin to use and analyze brain data, how will they prioritize privacy and data security and meet the industry’s top standards for protecting employee data?
Who will ultimately own the data that’s collected? And what are employees’ rights when their companies begin to roll out these technologies? Needless to say, the technology is well ahead of the policies and regulations that would need to be put in place.
Still, the technology is slowly moving into the mass market. A growing number of startups and large tech firms are working on safer, more accurate, and cheaper BCIs.
I expect to see business leaders embracing this technology and trying to leverage brain data to achieve better work efficiency and greater safety.
I recommend that business leaders start building a BCI strategy as soon as possible to address the potential risks and benefits.
Alexandre Gonfalonieri is the Head of Innovation at DNA Global Analytics and writes about AI and BCI. Find him on Twitter @AGonfalonieri.
Galileo viewed nature as a book written in the language of mathematics and decipherable through physics. His metaphor may have been a stretch for his milieu, but not for ours. Ours is a world of digits that must be read through computer science.
It is a world in which artificial-intelligence (AI) applications perform many tasks better than we can. Like fish in water, digital technologies are our infosphere’s true natives, while we analog organisms try to adapt to a new habitat, one that has come to include a mix of analog and digital components.
We are sharing the infosphere with artificial agents that are increasingly smart, autonomous, and even social. Some of these agents are already right in front of us, and others are discernible on the horizon, while later generations are unforeseeable. And the most profound implication of this epochal change may be that we are most likely only at the beginning of it.
The AI agents that have already arrived come in soft forms, such as apps, web bots, algorithms, and software of all kinds; and hard forms, such as robots, driverless cars, smart watches, and other gadgets.
They are even replacing white-collar workers, performing functions that, just a few years ago, were considered off-limits for technological disruption: cataloguing images, translating documents, interpreting radiographs, flying drones, extracting new information from huge data sets, and so forth.
Digital technologies and automation have been replacing workers in agriculture and manufacturing for decades; now they are coming to the services sector.
More old jobs will continue to disappear, and while we can only guess at the scale of the coming disruption, we should assume that it will be profound. Any job in which people serve as an interface – between, say, a GPS and a car, documents in different languages, ingredients and a finished dish, or symptoms and a corresponding disease – is now at risk.
But, at the same time, new jobs will appear, because we will need new interfaces between automated services, websites, AI applications, and so forth. Someone will need to ensure that the AI service’s translations are accurate and reliable.
What’s more, many tasks will not be cost-effective for AI applications. For example, Amazon’s Mechanical Turk program claims to give its customers “access to more than 500,000 workers from 190 countries,” and is marketed as a form of “artificial artificial intelligence.”
But as the repetition indicates, the human “Turks” are performing brainless tasks, and being paid pennies. These workers are in no position to turn down a job.
The risk is that AI will only continue to polarize our societies – between haves and never-will-haves – if we do not manage its effects. It is not hard to imagine a future social hierarchy that places a few patricians above both the machines and a massive new underclass of plebs.
Meanwhile, as jobs go, so will tax revenues; and it is unlikely that the companies profiting from AI will willingly step in to support adequate social-welfare programs for their former employees.
Instead, we will have to do something to make companies pay more, perhaps with a “robo-tax” on AI applications. We should also consider legislation and regulations to keep certain jobs “human.” Indeed, such measures are also why driverless trains are still rare, despite being more manageable than driverless taxis or buses.
Still, not all of AI’s implications for the future are so obvious. Some old jobs will survive, even when a machine is doing most of the work: a gardener who delegates cutting the grass to a “smart” lawnmower will simply have more time to focus on other things, such as landscape design.
At the same time, other tasks will be delegated back to us to perform (for free) as users, such as in the self-checkout lane at the supermarket.
Another source of uncertainty concerns the point at which AI is no longer controlled by a guild of technicians and managers. What will happen when AI becomes “democratized” and is available to billions of people on their smartphones or some other device?
All of these profound transformations oblige us to reflect seriously on who we are, could be, and would like to become. AI will challenge the exalted status we have conferred on our species.
While I do not think that we are wrong to consider ourselves exceptional, I suspect that AI will help us identify the irreproducible, strictly human elements of our existence, and make us realize that we are exceptional only insofar as we are successfully dysfunctional.
In the great software of the universe, we will remain a beautiful bug, and AI will increasingly become a normal feature.
Francisco Gimeno - BC Analyst: Surviving the 21st century is a very exciting question. Disruption is so rapid and so strong that individuals and societies are either in denial or simply letting the changes come without any reflection on what they mean for individuals, cultures, societal transactions, etc. Barring a black swan event or totalitarian control, we can expect a society where AI and new technologies will radically transform our view of the person, society, and our role in the universe, as the Renaissance did.
At the recent Blockchain LIVE 2019 hosted annually in London, I had the pleasure of giving a talk on Next Generation Infrastructure: Building a Future for Smart Cities.
What exactly is a “smart city?”
The term refers to an overall blueprint for the city designs of the future. Already, half the world’s population lives in cities, a share expected to grow to sixty-five percent in the next five years. Tackling that growth takes more than simple urban planning.
The goal of smart cities is to incorporate technology as an infrastructure to alleviate many of these complexities. Green energy, new forms of transportation, water and pollution management, universal identification (ID), wireless Internet systems, and the promotion of local commerce are examples of current smart city initiatives.
The current technology needs of smart cities are served by what is called the “Internet of Things” (IoT), a term that describes an overall network of devices with embedded unique identifiers. Example use cases for these devices include payment for items and traffic management.
In London, a traffic management system known as SCOOT optimises green light time at traffic intersections by feeding magnetometer and inductive loop data back to a supercomputer, which can coordinate traffic lights across the city to improve traffic throughput.
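SCOOT’s actual optimiser is proprietary, but a toy version of the underlying idea, allocating green time in proportion to detected demand within a fixed cycle, might look like the following (all numbers are assumptions for illustration):

```python
def split_green_time(vehicle_counts, cycle_s=90, min_green_s=10):
    """Allocate a fixed signal cycle across approaches in proportion to
    vehicle counts from loop detectors, guaranteeing a minimum green."""
    total = sum(vehicle_counts.values()) or 1
    spare = cycle_s - min_green_s * len(vehicle_counts)
    return {
        approach: min_green_s + spare * count / total
        for approach, count in vehicle_counts.items()
    }

# Hypothetical detector counts for one intersection over the last cycle.
counts = {"northbound": 24, "southbound": 18, "eastbound": 6, "westbound": 12}
for approach, green in split_green_time(counts).items():
    print(f"{approach}: {green:.0f} s green")
```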
Barcelona saved €75 million of city funds and created 47,000 new jobs in the smart technology sector by implementing a network of fiber optics throughout the city, providing free high-speed Wi-Fi that supports the IoT and further linking to the integration of smart water, lighting and parking management.
The Netherlands has tested the use of IoT-based infrastructure in Amsterdam, where traffic flow, energy usage and public safety are monitored and adjusted based on real-time data.
Meanwhile, in the United States, major cities like Boston and Baltimore have deployed smart trashcans that relay how full they are and determine the most efficient pick-up route for sanitation workers.
In 2015, India became one of the pioneers in this space, enacting a smart city mission across 12 of its cities. As governments across the world start to implement these initiatives, blockchain can provide the infrastructure necessary for transaction management.
Transparency and security, core fundamentals of blockchain, are two very important elements in a smart city implementation.

Today, there are over a dozen smart cities, and fewer than a quarter of them have an active large-scale implementation of blockchain or distributed ledger technology.
The city of Dubai has already planned to become the first blockchain-powered smart city by 2021, and the country of Estonia has been using variations of blockchain and distributed ledger technology to keep track of citizens since 2012.
Leading smart city developers like Hancom are already supplying products and services, from core IoT hardware to actual smart city development. The Gapyeong Malang Malang Smart Ecosystem, a 470-acre smart city development project, is just one of the many initiatives under the Hancom Group that will incorporate blockchain technology as the basis for smart city development.
The most recent project for the Group, is the development of the Atlanta based Augury Square. The Augury Square is a 30-acre project that will incorporate blockchain and the use of cryptocurrency accelerating the concept of digital currency usage into daily life activities for its residents.
Example use cases for blockchain that could improve resident life across cities are without bounds. Information captured and kept in a cloud-based infrastructure utilized by a smart city can be encoded through a blockchain system to ensure the privacy and security of data.
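As a minimal sketch of what "encoded through a blockchain system" can mean in practice (a toy hash chain, not any city's production system; the meter names and readings are made up), each record commits to the hash of the previous one, so any tampering breaks the chain:

    import hashlib, json, time

    def add_record(chain, payload):
        # Each record stores the previous record's hash, making the log tamper-evident.
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        record = {"ts": time.time(), "payload": payload, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        chain.append(record)

    chain = []
    add_record(chain, {"meter": "W-1042", "reading_kwh": 31.7})
    add_record(chain, {"meter": "W-1042", "reading_kwh": 33.2})
    # Verify the links: every record must reference the hash before it.
    print(all(r["prev"] == p["hash"] for p, r in zip(chain, chain[1:])))  # True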
The use of blockchain for identification in a smart city can assist with proof of citizenship, voting for public office, and tax data. In addition to security and fraud measures, the elimination of paperwork under such a system connects directly with the smart city goal of managing and reducing pollution and waste.
Other typical services include the use of internet sensors to detect road maintenance or other general repairs, and the connection of home utilities and rent to the blockchain, as well as healthcare services.
Blockchain healthcare networks that store protected health information can be useful in emergency situations involving individuals in crisis, proving beneficial to certified first responders (MFS) who need access to pertinent medical information.
What’s most important to a smart city, however, is integration. None of the services mentioned above exist in a vacuum; they need to be put into a single system.
Blockchain provides the technology to unite them into a single system that can track all of these aspects together.
The Smart City Expo will take place in Barcelona, Spain, in November 2019. It aims to discuss the growing urbanization of the world and how the attributes of blockchain can address it.
Chrissa McFarlane
Named as one of the top women "leaving their mark on the MedTech field in health IT" by Becker's Hospital Review, Chrissa McFarlane is the Founder and CEO of Patiento...
-
Francisco Gimeno - BC Analyst Smart cities surely need the blockchain to help with the integration of technologies, services, systems, data handling and more; we should see this easily. Our worry is how this is going to be used later. In authoritarian countries, smart city systems can be used to make citizens "behave" accordingly, even if the tech helps them live better. In open political systems, however, both a better life and citizens' empowerment can naturally follow.
-
-
Google has more computing power, data, and talent to pursue artificial intelligence than any other company on Earth—and it's not slowing down. That's why humans can't, either.
BY KATRINA BROOKER
The human brain is a funny thing. Certain memories can stick with us forever: the birth of a child, a car crash, an election day. But we only store some details—the color of the hospital delivery room or the smell of the polling station—while others fade, such as the face of the nurse when that child was born, or what we were wearing during that accident.
For Google CEO Sundar Pichai, the day he watched AI rise out of a lab is one he’ll remember forever.
“This was 2012, in a room with a small team, and there were just a few of us,” he tells me. An engineer named Jeff Dean, a legendary programmer at Google who helped build its search engine, had been working on a new project and wanted Pichai to have a look.
“Anytime Jeff wants to update you on something, you just get excited by it,” he says.
Pichai doesn’t recall exactly which building he was in when Dean presented his work, though odd details of that day have stuck with him. He remembers standing, rather than sitting, and someone joking about an HR snafu that had designated the newly hired Geoffrey Hinton—the “Father of Deep Learning,” an AI researcher for four decades, and, later, a Turing Award winner—as an intern.
The future CEO of Google was an SVP at the time, running Chrome and Apps, and he hadn’t been thinking about AI. No one at Google was, really, not in a significant way.
Yes, Google cofounders Larry Page and Sergey Brin had stated publicly 12 years prior that artificial intelligence would transform the company: “The ideal search engine is smart,” Page told Online magazine in May 2000.
“It has to understand your query, and it has to understand all the documents, and that’s clearly AI.” But at Google and elsewhere, machine learning had been delivering meager results for decades, despite grand promises.
[Illustration: Gabriel Silveira]
Now, though, powerful forces were stirring inside Google’s servers. For a little more than a year, Dean, Andrew Ng, and their colleagues had been building a massive network of interconnected computers, linked together in ways modeled on the human brain.
The team had engineered 16,000 processors in 1,000 computers, which—combined—were capable of making 1 billion connections. This was unprecedented for a computer system, though still far from a human brain’s capacity of more than 100 trillion connections.
To test how this massive neural net processed data, the engineers had run a deceptively simple experiment. For three days straight, they had fed the machine a diet of millions of random images from videos on YouTube, which Google had acquired in 2006. They gave it no other instructions, waiting to see what it would do if left on its own.
What they learned was that a computer brain bingeing on YouTube is not so different from a human's. In a remote part of the computer's memory, Dean and his peers discovered that it had spontaneously generated a blurry, overpixelated image of one thing it had seen repeatedly over the course of 72 hours: a cat. This was a machine teaching itself to think.
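Google's actual system was a vast, sparse deep network, but the underlying idea, learning structure from unlabeled data by trying to reconstruct it, can be sketched in a few lines (a toy illustration on random data, nothing like the scale described above):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((500, 64))  # 500 fake unlabeled "images" of 8x8 pixels

    n_hidden = 16
    W1 = rng.normal(0, 0.1, (64, n_hidden))  # encoder weights
    W2 = rng.normal(0, 0.1, (n_hidden, 64))  # decoder weights
    lr = 0.05

    for step in range(200):
        H = np.tanh(X @ W1)      # hidden features, learned without any labels
        X_hat = H @ W2           # attempted reconstruction of the input
        err = X_hat - X
        dW2 = H.T @ err / len(X)          # backpropagate the
        dH = (err @ W2.T) * (1 - H**2)    # reconstruction error
        dW1 = X.T @ dH / len(X)
        W1 -= lr * dW1
        W2 -= lr * dW2

    print("reconstruction error:", float((err**2).mean()))  # falls as it learns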
The day he watched this kind of intelligence emerge from Google's servers for the first time, Pichai remembers feeling a shift in his thinking, a sense of premonition. "This thing was going to scale up and maybe reveal the way the universe works," he says. "This will be the most important thing we work on as humanity."
The rise of AI inside Google resembles a journey billions of us are on collectively, hurtling into a digital future that few of us fully understand—and that we can't opt out of. It is a future dominated in large part by Google. Few other companies (let alone governments) on the planet have the ability or ambition to advance computerized thought.
Google operates more products with 1 billion users than any other tech company on earth: Android, Chrome, Drive, Gmail, Google Play Store, Maps, Photos, Search, and YouTube. Unless you live in China, if you have an internet connection, you almost certainly rely on Google to augment some parts of your brain.
Shortly after Pichai took over as CEO, in 2015, he set out to remake Google as an "AI first" company.
It already had several research-oriented AI divisions, including Google Brain and DeepMind (which it acquired in 2014), and Pichai focused on turning all that intelligence about intelligence into new and better Google products. Gmail’s Smart Compose, introduced in May 2018, is already suggesting more than 2 billion characters in email drafts each week.
Google Translate can re-create your own voice in a language you don’t speak. And Duplex, Google’s AI-powered personal assistant, can book appointments or reservations for you by phone using a voice that sounds so human, many recipients of the calls weren’t aware it was a robot, raising ethical questions and public complaints.
The company says it has always disclosed to consumers that the calls are coming from Google.
[Illustration: Gabriel Silveira]
The full reach of Google’s AI influence stretches far beyond the company’s offerings. Outside developers—at startups and big corporations alike—now use Google’s AI tools to do everything from training smart satellites to monitoring changes to the earth’s surface to rooting out abusive language on Twitter (well, it’s trying).
There are now millions of devices using Google AI, and this is just the beginning. Google is on the verge of achieving what’s known as quantum supremacy. This new breed of computer will be able to crack complex equations a million or more times faster than regular ones.
We are about to enter the rocket age of computing.
Used for good, artificial intelligence has the potential to help society. It may find cures to deadly diseases (Google execs say that its intelligent machines have demonstrated the ability to detect lung cancer a full year earlier than human doctors), feed the hungry, and even heal the climate.
A paper posted to the arXiv preprint server (hosted by Cornell University) in June by several leading AI researchers (including ones affiliated with Google) identified several ways machine learning can address climate change, from accelerating the development of solar fuels to radically optimizing energy usage.
Used for ill, AI has the potential to empower tyrants, crush human rights, and destroy democracy, freedom, and privacy.
The American Civil Liberties Union issued a report in June titled “The Dawn of Robot Surveillance” that warned how millions of surveillance cameras (such as those sold by Google) already installed across the United States could employ AI to enable government monitoring and control of citizens. This is already happening in parts of China.
A lawsuit filed that same month accuses Google of using AI in hospitals to violate patients’ privacy.
Every powerful advance in human history has been used for both good and evil. The printing press enabled the spread of Thomas Paine's "Common Sense" but also Adolf Hitler's fascist manifesto "Mein Kampf."
With AI, however, there's an extra dimension to this predicament: The printing press doesn't choose the type it sets. AI, when it achieves its full potential, would be able to do just that.
Now is the time to ask questions. “Think about the kinds of thoughts you wish people had inventing fire, starting the industrial revolution, or [developing] atomic power,” says Greg Brockman, cofounder of OpenAI, a startup focused on building artificial general intelligence that received a $1 billion investment from Microsoft in July.
Parties on both the political left and right argue that Google is too big and needs to be broken up. Would a fragmented Google democratize AI? Or, as leaders at the company warn, would it hand AI supremacy to the Chinese government, which has stated its intention to take the lead? President Xi Jinping has committed more than $150 billion toward the goal of becoming the world’s AI leader by 2030.
Inside Google, dueling factions are competing over the future of AI. Thousands of employees are in revolt against their leaders, trying to stop the tech they’re building from being used to help governments spy or wage war.
How Google decides to develop and deploy its AI may very well determine whether the technology will ultimately help or harm humanity. "Once you build these [AI] systems, they can be deployed across the whole world," explains Reid Hoffman, the LinkedIn cofounder and VC who's on the board of the Institute for Human-Centered Artificial Intelligence at Stanford University. "That means anything [their creators] get right or wrong will have a correspondingly massive-scale impact."
"In the beginning, the neural network is untrained," says Jeff Dean one glorious spring evening in Mountain View, California.
He is standing under a palm tree just outside the Shoreline Amphitheatre, where Google is hosting a party to celebrate the opening day of I/O, its annual technology showcase.
This event is where Google reveals to developers—and the rest of the world—where it is heading next. Dean, in a mauve-gray polo, jeans, sneakers, and a backpack double-strapped to his shoulders, is one of the headliners. "It's like meeting Bono," gushes one Korean software programmer who rushed over to take a selfie with Dean after he spoke at one event earlier in the day.
"Jeff is God," another tells me solemnly, almost surprised that I don't already know this. Around Google, Dean is often compared to Chuck Norris, the action star known for his kung fu moves and taking on multiple assailants at once.
"Oh, that looks good! I'll have one of those," Dean says with a grin as a waiter stops by with a tray of vegan tapioca pudding cups.
Leaning against a tree, he speaks about neural networks the way Laird Hamilton might describe surfing the Teahupo’o break. His eyes light up and his hands move in sweeping gestures.
“Okay, so here are the layers of the network,” he says, grabbing the tree and using the grizzled trunk to explain how the neurons of a computer brain interconnect.
He looks intently at the tree, as though he sees something hidden inside it.
Last year, Pichai named Dean head of Google AI, meaning that he's responsible for what the company will invest in and build—a role he earned in part by scaling the YouTube neural net experiment into a new framework for training their machines to think on a massive scale.
That system started as an internal project called DistBelief, which many teams, including Android, Maps, and YouTube, began using to make their products smarter.
But by the summer of 2014, as DistBelief grew inside Google, Dean started to see that it had flaws.
It had not been designed to adapt to technological shifts such as the rise of GPUs (the computer chips that process graphics) or the emergence of speech as a highly complex data set.
Also, DistBelief was not initially designed to be open source, which limited its growth. So he made a bold decision: Build a new version that would be open to all. In November 2015, Pichai introduced TensorFlow, DistBelief’s successor, one of his first big announcements as CEO.
It’s impossible to overstate the significance of opening TensorFlow to developers outside of Google. “People couldn’t wait to get their hands on it,” says Ian Bratt, director of machine learning at Arm, one of the world’s largest designers of computer chips. Today, Twitter is using it to build bots to monitor conversations, rank tweets, and entice people to spend more time in their feed.
Airbus is training satellites to be able to examine nearly any part of the earth’s surface, within a few feet. Students in New Delhi have transformed mobile devices into air-quality monitors. This past spring, Google released early versions of TensorFlow 2.0, which makes its AI even more accessible to inexperienced developers.
The ultimate goal is to make creating AI apps as easy as building a website. TensorFlow has now been downloaded approximately 41 million times. Millions of devices—cars, drones, satellites, laptops, phones—use it to learn, think, reason, and create.
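The "as easy as building a website" ambition is visible in how little code a first TensorFlow 2 model now takes. A minimal sketch using the bundled MNIST digits dataset (illustrative only; the hyperparameters are arbitrary):

    import tensorflow as tf

    # Load a small standard dataset and scale pixel values to [0, 1].
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train / 255.0

    # A tiny feed-forward classifier built with the high-level Keras API.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=64)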
An internal company document shows a chart tracking the usage of TensorFlow inside Google (which, by extension, tracks machine learning projects): It's up by 5,000% since 2015.
Tech insiders, though, point out that if TensorFlow is a gift to developers, it may also be a Trojan horse.
“I am worried that they are trying to be the gatekeepers of AI,” says an ex-Google engineer, who asked not to be named because his current work is dependent on access to Google’s platform.
At present, TensorFlow has just one main competitor, Facebook’s PyTorch, which is popular among academics. That gives Google a lot of control over the foundational layer of AI, and could tie its availability to other Google imperatives. “Look at what [Google’s] done with Android,” this person continues.
Last year, European Union regulators levied a $5 billion fine on the company for requiring electronics manufacturers to pre-install Google apps on devices running its mobile operating system. Google is appealing, but it faces further investigations for its competitive practices in both Europe and India.
By helping AI proliferate, Google has created demand for new tools and products that it can sell. One example is Tensor Processing Units (TPUs), which are integrated circuits designed to accelerate applications using TensorFlow.
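Connecting a TensorFlow program to those rented TPUs takes only a few lines of setup. A hedged sketch (the exact resolver argument depends on the environment; an empty string works on hosted notebooks such as Colab):

    import tensorflow as tf

    # Locate and initialize the TPU cluster, then get a distribution strategy.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Anything created in this scope is replicated across the TPU's cores.
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(10)])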
If developers need more power for their TensorFlow apps—and they usually do—they can pay Google for time and space using these chips running in Google data centers.
TensorFlow's success has won over the skeptics within Google's leadership. "Everybody knew that AI didn't work," Sergey Brin recalled to an interviewer at the World Economic Forum in 2017. "People tried it, they tried neural nets, and none of it worked." Even when Dean and his team started making progress, Brin was dismissive. "Jeff Dean would periodically come up to me and say, 'Look, the computer made a picture of a cat,' and I said, 'Okay, that's very nice, Jeff,'" he said. But he had to admit that AI was "the most significant development in computing in my lifetime."
Stage 4 of the Shoreline Amphitheatre fits 526 people, and every seat is taken. It's the second day of I/O, and Jen Gennai, Google's head of responsible innovation, is hosting a session on "Writing the Playbook for Fair and Ethical Artificial Intelligence and Machine Learning." She tells the crowd: "We've identified four areas that are our red lines, technologies that we will not pursue. We will not build or deploy weapons. We will also not deploy technologies that we feel violate international human rights." (The company also pledges to eschew technologies that cause "overall harm" and "gather or use information for surveillance, violating internationally accepted norms.") She and two other Google executives go on to explain how the company now incorporates its AI principles into everything it builds, and that Google has a comprehensive plan for tackling everything from rooting out biases in its algorithms to forecasting the unintended consequences of AI.
After the talk, a small group of developers from different companies mingles, dissatisfied. "I don't feel like we got enough," observes one, an employee of a large international corporation that uses TensorFlow and frequently partners with Google. "They are telling us, 'Don't worry about it. We got this.' We all know they don't 'got this.'"
These developers have every right to be skeptical. Google's rhetoric has often contrasted with its actions, but the stakes are higher with artificial intelligence. Gizmodo was first to report, in March 2018, that the company had a Pentagon contract for AI drone-strike technology, dubbed Project Maven.
After Google employees protested for three months, Pichai announced that the contract would not be renewed. Shortly thereafter, another project came to light: Dragonfly, a search engine for Chinese users designed to be as powerful and ubiquitous as the one reportedly used for 94% of U.S. searches, except that it would also comply with China’s censorship rules, which ban content on some topics related to human rights, democracy, freedom of speech, and civil disobedience.
Dragonfly would also link users' phone numbers to their searches. Employees protested for another four months, and activists attempted to enlist Amnesty International and Google shareholders in the fight. Last December, Pichai told Congress that Google has no plans to launch the search engine in China.
[Illustration: Gabriel Silveira]
During that turmoil, a Google engineer confronted Dean directly about whether the company would continue working with oppressive regimes. “We need to know: What are the red lines?” the engineer tells me, echoing Google’s own verbiage. “I was pushing for: What are things you would never do? I never got clarification.” The employee quit in protest.
When asked today about the dark side of AI, the amiable Dean turns serious. “People in my organization were very outspoken about what we should be doing with the Department of Defense,” he says, referring to their work on Maven. Dean invokes Google’s list of AI applications that it won’t pursue.
“One of them is work on autonomous weapons. That, to me, is something I don’t want to work on or have anything to do with,” he says, looking me straight in the eyes.
Amid the initial Project Maven controversy, The Intercept and The New York Times published emails that revealed Google’s internal concerns about how the extent of its AI ambitions might be received.
“I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons,” Fei-Fei Li, then Google Cloud’s chief AI scientist (and one of the authors of Google’s AI principles), told colleagues in one of them.
“Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI—if not THE most. This is red meat to the media to find all the ways to damage Google.
” She also suggested that the company plant some positive PR stories about Google’s democratization of AI and something described as humanistic AI. “I’d be super careful to protect these very positive images,” she wrote. (Li declined to be interviewed for this story.
She has since left the company to co-lead Stanford’s Human-Centered AI Institute.) These AI protests have created an ongoing PR crisis. In March, the company announced an Advanced Technology External Advisory Council, colloquially known as its AI ethics board, but it fell apart just over a week later when thousands of Google employees protested its makeup.
The board had included a drone-company CEO and the president of the right-wing Heritage Foundation, who had made public statements that were transphobic and denied climate change.
Pichai himself has stepped in several times.
Last November, he wrote to employees, acknowledging Google’s missteps. “We recognize that we have not always gotten everything right in the past and we are sincerely sorry for that,” he said. “It’s clear we need to make some changes.” But controversy continues to dog Google on how it deploys technology.
In August, an employee organization called Googlers for Human Rights released a public petition with more than 800 signatures asking the company not to offer any tech to Customs and Border Protection, Immigration and Customs Enforcement, or the Office of Refugee Resettlement. (A representative for Google responds that the company supports employee activism.)
When I ask Pichai about how Google's AI principles influence his own work, he connects it to another corporate priority: assuaging concerns about what Google does with all the user data it possesses. "What I am pushing the teams on is around AI and privacy," he says. "It's a bit counterintuitive, but I think AI gives us a chance to enhance privacy." Last spring he discussed efforts within Google to use machine learning to protect data on a smartphone from being accessed by anyone other than its owner.
He says fears about the dangers of AI are overblown.
“It’s important for people to understand what not to worry about, too, which is, it’s really early, and we do have time,” he explains.
Pichai hopes that Google can quell any disquiet over AI's dangers by showcasing its virtue. Under an initiative dubbed AI for Social Good, Google is deploying its machine learning to solve what it describes as "the world's greatest social, humanitarian, and environmental problems."
There are teams harnessing AI to forecast floods, track whales, diagnose cancer, and detect illegal mining and logging. At I/O, one young entrepreneur from Uganda, invited by Google, spoke of using TensorFlow to track armyworms across Africa, a cause of famine throughout the continent. Google's AI Impact Challenge, launched in 2018, offers $25 million in grants to charities and startups applying AI to causes such as saving rain forests and fighting fires.
The company has also pulled back on two controversial initiatives amid the AI debate. Last December, Google shelved its facial-recognition software, even as rival Amazon moved forward with its own version despite its own employee protests and charges that it enables law enforcement to racially profile citizens.
One insider estimates that the move could cost Google billions in revenue. The company also withdrew from bidding on a $10 billion project to provide cloud computing to the Pentagon, citing ethical concerns. Amazon and Microsoft are still in the running.
When asked how Google determines whether a project is good or bad for society, Pichai cites something called “the lip-reading project.” A team of engineers had an idea to use AI in cameras to read lips. The intention was to enable communication for nonverbal people. However, some raised concerns about unintended consequences.
Could bad actors use it for surveillance through, say, street cameras? The engineers tested it on street cams, CCTV, and other public cameras, and determined that the AI needs to be close-up to work. Google published a paper detailing the effort, confident that, for now, it can be used safely.
It’s a sunny afternoon in Santa Barbara, California, but the thermometer inside Google’s lab reads 10 millikelvin, about 1/100th of a kelvin above absolute zero. “This is one of the coldest places in the universe,” Erik Lucero, a research scientist working in the lab, tells me. “Inside of this,” he says, pointing to a shiny metal container, “is colder than space.
” The vessel is the size and shape of an oil drum, made of copper and plated with real gold. Thick wires made out of niobium-titanium emerge from the top, octopus-like, carrying control and measurement signals to and from its processor.This barrel encases one of the most fragile and potentially most powerful machines on earth: a quantum computer.
If all goes as planned, it will turbocharge the capabilities of artificial intelligence in ways that may well reshape how we think about the universe—and humanity’s place in it.
The dream of quantum computing has been around since the ’80s, when Richard Feynman, an original member of the Manhattan Project, which built the atomic bomb, began theorizing ways to unlock computing power by adapting the quantum mechanics used to create nuclear science.
Today, our computers run on bits of information that equal either zero or one in value; they have to calculate outcomes, probabilities, and equations step-by-step, serially exhausting every option before arriving at an answer. Quantum computers, by contrast, create qubits, where zeros and ones can exist simultaneously.
This allows qubits to process certain kinds of information far faster. How much faster? One widely cited example is that a 300-qubit computer could perform as many simultaneous calculations as there are atoms in the universe.
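The arithmetic behind that comparison is easy to check (using the common rough estimate of 10^80 atoms in the observable universe):

    # A register of n qubits spans 2**n basis states.
    states = 2 ** 300
    print(f"{states:.3e}")    # about 2.037e+90 simultaneous amplitudes
    print(states > 10 ** 80)  # True: more than the ~10^80 atoms estimated
                              # to exist in the observable universe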
“Those are actually qubits,” Lucero says, directing me to look under a microscope, where I see some fuzzy black Xs. There are 22 of them.
This is the smaller batch. Elsewhere in the lab, Google has created 72 qubits. For now, they can only survive for 20 microseconds, and conditions have to be colder than outer space.
In order to create a commercially viable quantum computer, Google will need to produce enough qubits and keep them stable and error-free long enough to be able to make any major computing breakthroughs.
Other labs are competing here, too, but Google has assembled some of the world’s foremost experts to find ways to create an environment in which qubits can survive and thrive. It’s moving faster toward this goal than anyone expected: Last December, Google tested its best quantum processor against a regular laptop, and the laptop won.
A few weeks later, after some adjustments to the processor, it beat the laptop, but still lagged behind a desktop computer. In February, the quantum computer outmatched every other computer in the lab.
Hartmut Neven, who leads Google’s quantum team, presented the lab’s advances during Google’s Quantum Spring Symposium in May, describing the increases in processing power as double exponential, a mind-bending equation that looks like this:
2^(2^1), 2^(2^2), 2^(2^3), 2^(2^4), ...
Within computer science circles, this growth rate for quantum computing has been dubbed Neven’s law, a nod to Moore’s law, which posits that “classical” computing advances by doubling the number of transistors that can fit on a chip every 18 months.
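The difference between the two growth laws is easy to see numerically (abstract units, not real benchmarks):

    # Moore's law flavour (2**k) versus Neven's double exponential (2**(2**k)).
    for k in range(1, 6):
        print(k, 2 ** k, 2 ** (2 ** k))
    # k=5 already gives 32 vs 4,294,967,296: the double exponential explodes.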
Now Google’s team is honing in on the major milestone known as quantum supremacy. It will still be years before Google’s quantum computer reaches its full potential. But in the lab, the anticipation of this moment is palpable.
“There are currently problems that humanity [will] not be able to solve without a quantum computer,” Lucero says, standing next to the machine poised to achieve this feat. “The whole idea that you are jumping into a new potential for humankind, that’s exciting.
”The room hums rhythmically, the sound of qubits hatching. What will it mean for humanity when computers can think and calculate at exponentially faster speeds—and on parallel planes? This emerging science may be able to explain the deepest mysteries of the universe—dark matter, black holes, the human brain.
“It’s the ‘Hello, World!’ moment,” Lucero says, referring to the 1984 introduction of Macintosh, the computer that launched a new era for a generation of coders. As Google opens the door to this new cosmos, we all need to get ready for what’s on the other side.
A version of this article appeared in the October 2019 issue of Fast Company magazine.-
Francisco Gimeno - BC Analyst Any SF enthusiast reading this article will remember Asimov's Multivac, the global computer that is sometimes a servant of humanity, sometimes the reason for its end, and sometimes a universal dictator, always for the good of humanity, of course. Google's work up to now involves thinking not just about how to create a working AI, but about how to deal with the consequences of having a non-human intelligence that can potentially be more powerful than ours.
-
-
Digital Twins: why virtual replicas of assets create real business value | IoT S... (iotsworldcongress.com)
The idea is not new. It goes back to computer-aided design representations of things, say, models to evaluate "what if" scenarios. But Artificial Intelligence, advanced data analytics and IoT to support specific business outcomes add a new dimension to these models.
Now, they’re called Digital Twins and they are among Gartner’s top ten strategic trends for 2019. For good reason. They provide an exact digital replica of a physical object, system or process. So, when paired with engineering simulations, Digital Twins can answer “then-what” and “what-if” questions that would be expensive or difficult to pose with the physical product.
As a result, they are already being adopted in a variety of industries, particularly in asset-heavy sectors, such as aerospace, oil and gas, automotive and industrial products as they enable modeling, simulations, testing and monitoring based on the data collected by IoT sensors.Examples are numerous.
Imagine that you need to know whether your device’s performance changes if you make a part with a different material, or more concretely, whether a seat in an automobile is likely to fail security tests in certain conditions. Imagine that you want to improve the positioning of factory robots in a production line and eliminate inefficient movement.
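Stripped to its essentials, the pattern is a virtual object that ingests live sensor readings and answers "what-if" questions offline. A toy sketch (the class, names and numbers are all hypothetical):

    # A minimal digital twin of a car seat: it records the peak load seen by
    # IoT sensors on test rigs, then answers a "what-if" question about a
    # candidate material without touching the physical product.
    class SeatTwin:
        def __init__(self, material_strength_mpa):
            self.strength = material_strength_mpa
            self.peak_load_mpa = 0.0

        def ingest(self, sensor_load_mpa):  # called for each live reading
            self.peak_load_mpa = max(self.peak_load_mpa, sensor_load_mpa)

        def what_if(self, new_strength_mpa, safety_factor=1.5):
            # Would a seat made of the new material still pass the test?
            return new_strength_mpa >= self.peak_load_mpa * safety_factor

    twin = SeatTwin(material_strength_mpa=420)
    for load in (130, 188, 152):  # readings streamed from crash-test sensors
        twin.ingest(load)
    print(twin.what_if(new_strength_mpa=300))  # True: 300 >= 188 * 1.5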
Digital Twins can give you the answers you need to jump in the deep end and evolve your business.
Gartner predicts that by 2021, 50% of large industrial companies will use Digital Twins, resulting in a 10% improvement in effectiveness.
It also makes clear that the focus today is on Digital Twins in the IoT. Deloitte says that the global market for Digital Twins is expected to grow 38% annually to reach $16 billion by 2023.
New avenues to revenue
In any case, the adoption of these versatile avatars is spreading. And it does make sense, as "the ultimate purpose of a Digital Twin is to enable business outcomes, be it with existing products within the established value chain, or new products and services within new emerging ecosystems", as Deloitte partner Maximilian Shroeck stressed at IoT Solutions World Congress (IoTSWC) 2018.
“Digital Twins offer new avenues to revenue for enterprises either through services or new solution stacks and allow for monetization of data and insight in completely new forms,” he said.
But where to start? Mark Gallant, Senior Director of IoT Solutions at PTC, put it this way: "If you're making high-complexity products at lower volume, you may certainly start with the product as a Digital Twin. If you're making a lower-complexity product at very high volume, you're going to start with the machines and the process. So, your Digital Twin is going to start there." Quite simply, it really depends on your business.
All in all, the whole process is a team effort, as the implication of using Digital Twin technologies is equally transformational for the enterprise. This is the reason why IoTSWC 2019 offers several sessions on this topic, particularly on how to power Digital Twin applications to create intelligent systems that perform better and deliver improved products and services.
Applying human knowledge
In this sense, Teresa Tung, Managing Director at Accenture Labs and speaker at IoTSWC 2019, considers that the role of human expertise is critical in fulfilling the promise of the Digital Twin. Thus, a critical phase in this technology’s maturity lies in capturing and applying human knowledge to complement AI and automation technologies.
As Mark Gallant also highlighted at IoTSWC 2018, a floor manager with 30 years of experience knows perfectly well when to adjust an important machine because it doesn't sound right. His conclusion was clear: don't be afraid to listen to what your team has to say, and be prepared to address the issues raised by the different parties interacting with the data.
All the more so as there will be billions of things represented by Digital Twins within the next five years. This is not a small project, and it poses several challenges related to distributed data management from edge to cloud, security threats and data ethics issues. It's not an easy feat to pull off.
Yet companies looking to stay ahead of their competitors need to consider the implementation of Digital Twins if they want to make data-driven decisions and experiment with future scenarios to drive innovation. In fact, most companies cannot afford not to have Digital Twins anymore, says Teresa Tung.
In a sense, this might change the very definition of a product, but this is another matter.
AUTHOR: Anna Solana, IOTSWC
SOURCE: IOT Solutions World Congress
-
Francisco Gimeno - BC Analyst Many global companies have departments of Foresight and Future Planning. "Digital Twins" for many sectors could be an awesome and exciting proposal to develop this in a digital era, more when IoT and other techs continue growing exponentially. Maybe in the near future we humans (already living both in a digital/real world) will have also a "digital twin", opening new horizons on Homo sapiens "Digitalis"' evolution!
-
-
Israeli scientists reveal world’s first 3D print of heart with blood vesse... (jewishnews.timesofisrael.com)
A team of researchers in Israel has produced the world's first 3D print of a heart made with human tissue, calling the feat "a major medical breakthrough."
The 3D print was realised by scientists at Tel Aviv University, who hope to one day create hearts and patches suitable for human transplant. It was the first time an entire human heart with cells and blood vessels had been printed successfully, according to Tal Dvir, who led the project.
"At this stage, our 3D heart is small, the size of a rabbit's heart," explained Professor Dvir. "But larger human hearts require the same technology."
"People have managed to 3D-print the structure of a heart in the past, but not with cells or with blood vessels," he added.
Researchers took a biopsy of fatty tissue from patients, which they used to develop the ink needed for the 3D print. Using the patient's tissue reduces the risk of an implant being rejected, Dvir said. The scientists unveiled their findings, which were published in the peer-reviewed journal Advanced Science, on Monday.
They will now focus on developing and teaching the printed hearts to "behave" like hearts, Dvir said.
“The cells need to form a pumping ability. They can currently contract, but we need them to work together,” he added. “Maybe, in ten years, there will be organ printers in the finest hospitals around the world, and these procedures will be conducted routinely.”-
Francisco Gimeno - BC Analyst The 4th IR is not just blockchain, cryptos and AI; it is about humanness, about learning to use new technologies which, by themselves and enmeshed with each other, are beneficial for our society, people and the world. 3D printing for transplants sounds like a "Star Trek" future, but it is not. It is already here, alongside the new medical tech that heals paralysed bodies and minds. Exciting times.
-
-
Technologies such as 5G, IoT sensors and platforms, edge computing, AI and analytics, robotics, blockchain, additive manufacturing and virtual/augmented reality are coalescing into a fertile environment for the Industrial Internet of Things (IIoT), which is set to usher in what's often described as the Fourth Industrial Revolution or Industry 4.0.
Here's how analyst firm IoT Analytics sees the relationship between the broader IoT and the IIoT/Industry 4.0 sector:
Image: IoT Analytics
In this brave new world, supply chains will have end-to-end transparency thanks to sensors, data networks and analytics capabilities at key points. All other things (trade barriers, for example) being equal, parts and raw materials will arrive just in time at highly automated factories, and the fate of the resulting products will be tracked throughout their lifetimes to eventual recycling.
Similarly, 'smart farms' will combine emerging IIoT-related technologies into integrated high-resolution crop production systems based on robotics, big data and analytics.
As a result, businesses deploying IIoT systems will see increased operational efficiency, will reduce their environmental impact, and will have better information on which to base their future plans.
That's the theory anyway, but how are things progressing in practice? This guide examines available information on IIoT adoption, market size estimates, startup activity and IIoT platforms. See the rest of ZDNet's special report for more detail on other important areas, including security.
Who is deploying IIoT solutions?
The State of the Industrial Internet of Things (PTC)
PTC is a leading IIoT software platform provider, and this research, published in February 2018, was based on data gathered from the company's customer base.
The Americas led the way on IIoT adoption in PTC's survey (45%), followed by EMEA (33%) and Asia Pacific (22%). The leading sectors deploying IIoT solutions were industrial products (25%) followed by electronics & high-tech (23%) and automotive (13%):
Image: PTC
"These industries lead the IoT charge because they have complex manufacturing and operational processes, along with high-capital equipment, that can benefit greatly from IoT solutions and data-driven insights that drive more sustainable, resilient, and efficient processes," the report said.
The majority of IIoT-adopting companies were larger organisations (58% had revenues over $500 million), although a third (31%) were smaller, presumably nimbler, companies with revenues of less than $100m.
IIoT use cases in PTC's survey encompassed manufacturing/operations, service, product design and IT:
Image: PTC
"The most predominant use cases employ the IoT for manufacturing operational intelligence and operational asset monitoring. These smart, connected capabilities help product manufacturers increase throughput, improve production quality, and reduce manufacturing costs," the report said.
IIoT deployments generate vast amounts of data, and decisions need to be made about where best to store and analyse that data. PTC's report noted that factories and hospitals, for example, might favour on-premises deployments due to the need for security and low-latency response, while for smart cities, transportation or oil-and-gas the scalability of the cloud could win the day.
At the time of PTC's survey, on-premises deployments outnumbered cloud deployments (62% versus 38%).
In conclusion, PTC stated that "IoT is no longer a wait and see technology; companies must act now or risk being left behind," noting that 83 percent of adopters in its survey planned to move their deployments from proof-of-concept to full-scale production environments within 12 months.
Industrial IoT on Land and at Sea (Inmarsat)
Image: Inmarsat
Inmarsat's 2018 research looked at the adoption of the IIoT in the agriculture, energy, maritime, mining and transport sectors (with an emphasis, naturally, on the role of satellite connectivity as an enabling technology).
Market researchers Vanson Bourne interviewed 750 respondents with decision-making or influencing responsibilities for IIoT initiatives in their organisations, which covered the Americas, EMEA and APAC, and had at least 500 employees. (An exception was the maritime sector, where 45 percent of organisations had fewer than 500 employees.) Here are the headline findings.
Data: Inmarsat / Chart: ZDNet
Nearly half (46%) of businesses reported either fully deploying (21%) or trialling (25%) IIoT solutions, with the main drivers being resource efficiency, improving health & safety and monitoring environmental change.
The main barrier to IIoT adoption was skills, followed by a lack of turnkey/off-the-shelf solutions, higher-than-expected costs, and security. Specific IIoT-related skills identified as lacking were security (56%), analytical/data science (48%) and tech support (42%), with decision-making, management, planning, database management and customer service also mentioned in dispatches.
On the security front, nearly two-thirds (65%) of respondents recognised that their IIoT defences should be stronger, with external cyber-attacks, poor network security and misuse of data by employees being the biggest challenges.
Data is the key to IIoT-based transformation, and Inmarsat's respondents flagged up cost-saving and efficiency opportunities, productivity monitoring and health-and-safety improvements as current data-related concerns. Going forward, better decision-making, increased internal data visibility and greater supply-chain insights were seen as important potential benefits.
Unsurprisingly, the maritime sector placed the most importance on satellite communications (Inmarsat's core business) in their IIoT deployments. RFID and Bluetooth LE headed up the list of other connectivity technologies used by survey respondents.
A quarter of Inmarsat's respondents expected to spend more than 10 percent of their IT budgets on IIoT solutions over the next three years, and overall were expecting good returns on their investments: "a 10 percent reduction in costs and a five per cent lift in turnovers expected at the end of this period, and more by 2023," the report said.
Data: Inmarsat / Chart: ZDNet
Finally, to assess the level of readiness for IIoT solutions, Inmarsat scored survey respondents on six key areas (adoption, security, connectivity, skills, data and investment/ROI) and divided them into categories (laggards, starters, progressives and leaders). On this basis, the maritime and transport sectors are leading the way on IIoT, with mining bringing up the rear:
Data: Inmarsat / Chart: ZDNet
Inmarsat has recently announced a collaboration agreement with Microsoft, combining the former's satellite communications network with the latter's cloud-based Azure IoT Central platform.
Data generated by IIoT infrastructure, wherever it may be located, will be transferred via satellite to Azure IoT Central for analysis. This tie-up will focus initially on IIoT solutions for the agriculture, mining, transportation and logistics sectors, says Inmarsat.
The IIoT market
Given the number of core and supporting technologies in the IIoT, and the need for manufacturing businesses of all sizes to keep up with digital transformation or fall behind their competitors, it's no surprise to find that analysts are forecasting impressive (if somewhat varying) growth for this market over the next few years:
Research & Markets (May 2018)
2023 forecast: $214bn.
Coverage: 10 industries, 10 technologies, 4 revenue sources, 5 regional and 22 national markets, offering 2016-2017 estimates and 2018-2023 forecasts and analyses for each.
Highlights: The major winners might be those that control Industry 4.0 Platforms -- software layers that syndicate various devices, information and services, on top of which other firms can build their own offerings.

Markets & Markets (June 2018)
Current market size: $64bn (2018). 2023 forecast: $91.4bn. CAGR: 7.34%.
Coverage: Devices & technology (sensor, RFID, industrial robotics, DCS, condition monitoring, smart meter, camera system, networking technology), software (PLM, MES, SCADA), verticals and geographies.
Highlights: The IIoT market for smart beacons technology will grow at a high rate between 2018 and 2023; the manufacturing vertical will hold the largest share of the IIoT market in 2018; the IIoT market for the agriculture vertical is likely to grow at the highest CAGR between 2018 and 2023; the IIoT market in APAC will grow at the highest rate during the forecast period.

Zion Market Research (July 2018)
Current market size: $145.8bn (2017). 2023 forecast: $232.1bn. CAGR: 8.06%.
Coverage: Components (sensors, industrial robotics, Distributed Control Systems [DCS], condition monitoring, camera systems, smart meters), software (Product Lifecycle Management [PLM] systems, Manufacturing Execution System [MES], SCADA systems, distribution management systems), verticals (manufacturing, utilities, oil & gas, metals & mining, retail, healthcare, transportation & logistics).
Highlights: The camera system segment is expected to grow at the highest CAGR in the forecast period; in the software segment, demand for distribution systems has increased over time due to their growing use in transportation and logistics; the manufacturing sector is expected to see moderate growth in the IIoT market due to the adoption of advanced robotics and cloud robotics in manufacturing practices.

IoT Analytics (November 2018)
Current market size: $64bn (2018). 2023 forecast: $310bn. CAGR: 37%.
Coverage: 6 connected industry building blocks, 6 supporting technologies, 12 use cases, 7 regions.
Highlights: Key findings include 9 disruptive trends that have the potential to fundamentally change existing industrial network architectures, business models, and technology stacks. Three of the nine relate to the traditional 5-layer automation pyramid; one such trend is for I/O and PLC hardware to bypass the pyramid and instead connect to the cloud either directly or via industrial gateways.

IDC (January 2019)
Current market size: $329bn (2019).
Coverage: The Worldwide Semiannual Internet of Things Spending Guide forecasts IoT spending for 14 technology categories and 82 named use cases across 20 industries in nine regions and 53 countries.
Highlights: The industries forecast to spend the most on IoT solutions in 2019 are discrete manufacturing ($119 billion), process manufacturing ($78 billion), transportation ($71 billion), and utilities ($61 billion).

Here's how the November 2018 report from IoT Analytics sees the Industry 4.0 (I4.0) market growing between 2017 and 2023:
Image: IoT Analytics
Among the 12 use cases identified by IoT Analytics, the largest in terms of market size will be Advanced Digital Product Development, while the biggest growth rates between 2018 and 2023 will be for Additive Production (i.e. industrial-scale 3D printing) and Augmented Operations:
Image: IoT Analytics
In a statement, Matthew Wopata, the report's main author and IoT Analytics' lead expert for Industrial IoT, said: "Advanced digital product development emerged as the largest use case for I4.0 technologies as companies are using additive manufacturing, AR/VR, and digital twin technologies to reduce product development costs and time to market. Other large use cases such as data-driven quality control, predictive maintenance, and data-driven asset/plant performance optimization will continue to grow in popularity as manufacturers use I4.0 technologies to improve their operational KPIs, such as OEE [Overall Equipment Effectiveness]. The leading vendors of I4.0 solutions are hyper-focused on customer pain points/use cases and ensure that the data-driven insights generated from I4.0 solutions lead to measurable improvements and tangible ROIs."
IIoT industry associations
There are a number of industry associations relevant to the IIoT, and on 31 January 2019 two of the leading bodies -- the Industrial Internet Consortium (IIC) and the OpenFog Consortium -- announced that they had united.
The IIC's goal is to accelerate the industrial internet in five areas: utilising existing and creating new use cases and testbeds for real-world applications; delivering best practices, reference architectures, case studies and standards requirements; influencing the development of global standards for internet and industrial systems; facilitating open forums to share and exchange real-world ideas, practices, lessons and insights; and building confidence around new and innovative approaches to security.
The OpenFog Consortium's raison d'être is defined thus: "Our efforts will define an architecture of distributed computing, network, storage, control and resources that will support intelligence at the edge of IoT, including autonomous and self-aware machines, things, devices, and smart objects. OpenFog members will also identify and develop new operational models. Ultimately, our work will help to enable and drive the next generation of IoT."
According to the IIC/OpenFog merger statement, "the organizations will work together under the IIC umbrella to drive the momentum of the industrial internet, including the development and promotion of industry guidance and best practices for fog and edge computing."
"This agreement brings together the two most important organizations shaping the Industrial Internet of Things. The combined organization offers greater influence to members, more clarity to the market, and a lower-risk path to the future for end users.
We will be the center of gravity for the future of Industrial IoT systems across industry verticals," said Stan Schneider, CEO of Real-Time Innovations (RTI) and vice-chair of the IIC Steering Committee, in a statement.
Other IIoT-related bodies (listed by IoT Analytics) include: Plattform Industrie 4.0, Labs Network Industrie 4.0, OPC Foundation, Industrial Data Spaces Association, CyberValley of Baden-Württemberg, Center for the Development and Application of Internet of Things Technologies, and Manufacturing USA.
IIoT platforms
The IIoT represents the convergence of operational technology (OT) and enterprise IT systems, the potential benefits being improved asset management and operational visibility. IIoT software platforms need to enable these benefits and interface with enterprise systems, and need to do so securely.
Gartner's first Magic Quadrant for Industrial IoT Platforms (May 2018) included the requirement that "the product must be available as both a cloud industrial IoT platform and an on-premises deployment".
This raised eyebrows because several companies with "significant brand equity associated with IIoT" -- including Bosch, GE Digital, Microsoft, Schneider Electric and Siemens -- were excluded from consideration for lacking the on-premises component.
"Simply put, the culture of industrial engineers, while changing, places high trust in what they can touch and control," Gartner explained. "An on-premises deployment of an IIoT platform is the genesis of forming trust.
"Gartner also required vendors to "develop, market and sell IIoT platforms as asset-agnostic, horizontal middleware that is salable as a stand-alone offering", in order to "ensure broad availability and usefulness for industrial enterprises conducting due diligence". This also ruled out some large and important manufacturers.
All this helps to explain the somewhat sparse nature of Gartner's inaugural IIoT Magic Quadrant, and the fact that it's devoid of any entries in the Leaders and Challengers quadrants:
Image: Gartner
PTC, SAP and Hitachi were classified as Visionaries, while the remaining eight vendors -- some big names among them -- were seen as Niche Players.
For Gartner, PTC's strength is in its core applications for product lifecycle management (PLM), computer-aided design (CAD) and service lifecycle management (SLM).
The company focuses on solutions for asset monitoring, predictive maintenance and operational excellence.
SAP's Leonardo is a multi-cloud (AWS, Google, Microsoft) platform with a separate on-premises solution.
According to Gartner, Leonardo is best suited to SAP customers seeking to combine IT/OT integration with SAP's IoT applications, asset intelligence network and legacy industry applications.
Hitachi Vantara's Lumada platform is best for industrial environments involving Hitachi equipment, says Gartner, where customers can leverage prebuilt functionality for edge device interaction and off-the-shelf 'solution cores' that address requirements such as industrial asset monitoring, maintenance, scheduling, quality, safety and productivity.
The IIoT platform market was examined in The Forrester Wave: Industrial IoT Software Platforms, Q3 2018, with somewhat different results. For Forrester, an IIoT software platform must do five things: (1) create the link between industrial machinery and digital systems; (2) protect IoT devices and data from attack; (3) control the provisioning, maintenance and operation of IoT devices; (4) transform data into timely, relevant insight and action; and (5) create applications and integrate with enterprise systems.
Where Forrester parts company with Gartner is in its judgement that 'the place to be' is the public cloud: "Practical considerations about providing connectivity to remote locations, plus a general suspicion about the security, capability, and trustworthiness of startup-obsessed public cloud providers, led the early entrants in the industrial IoT space to invest in building their own networks of data centers. Those days are behind us. All of the evaluated vendors retain some ability to deploy in private data centers, but the direction of travel is clear: They, and their customers, are headed to the cloud."
Other changes since Forrester first evaluated IoT platforms in 2016 include: modern API-backed user interfaces; analytics, plus machine learning and AI, as a core component; increased data flows with other enterprise systems (ERP, CRM, service desk); digital twins, with augmented reality on the way; and solutions focused on use cases such as predictive maintenance.
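That last use case reduces, in its simplest form, to comparing live readings against a learned baseline. A toy sketch of the pattern (values and thresholds are made up, and real platforms use far richer models):

    from collections import deque

    # Flag a sensor reading that drifts from the rolling average of recent ones.
    def monitor(readings, window=5, tolerance=0.15):
        recent = deque(maxlen=window)
        for t, value in enumerate(readings):
            if len(recent) == window:
                baseline = sum(recent) / window
                if abs(value - baseline) > tolerance * baseline:
                    print(f"t={t}: reading {value} drifts from baseline {baseline:.1f}")
            recent.append(value)

    # Vibration amplitudes streamed from a pump; the spike should be flagged.
    monitor([10.1, 10.3, 9.9, 10.2, 10.0, 10.1, 13.5, 10.2])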
Forrester included 15 vendors in its evaluation: Amazon Web Services, Atos, Bosch, C3 IoT, Cisco, GE Digital, Hitachi, IBM, Microsoft, Oracle, PTC, SAP, Schneider Electric, Siemens and Software AG.
Judged on their current offerings, strategies and market presence, IBM, C3 IoT, Microsoft, SAP and PTC emerged as Leaders:
Image: Forrester
According to Forrester: IBM's Watson IoT Platform offers 'extensive analytics with industry-specific and services expertise'; C3 IoT 'leaves management of things to partners and differentiates with analytics'; Microsoft's Azure IoT 'offers enabling cloud infrastructure and more' -- including development tools, advanced analytics capabilities, augmented reality (HoloLens) and edge computing; SAP Leonardo 'encompasses IoT as well as other digital innovation technologies' -- including machine learning, blockchain and big data; and PTC 'fuses device connectivity strength with augmented reality vision'.
IIoT players
There are many more companies involved in the IIoT ecosystem than just software platform vendors, of course. For its Industry 4.0 & Smart Manufacturing 2018-2023 report, IoT Analytics identified over 300 companies that deliver products and services driving the fourth industrial revolution.
The analyst firm divides these into suppliers of 'Connected industry building blocks' and suppliers of 'Other Industry 4.0 supporting technologies':
Image: IoT Analytics
Analytics and connectivity hardware lead the way in the building blocks category, while additive manufacturing (industrial 3D printing) and AR/VR head up the supporting technologies:
Data: IoT Analytics / Chart: ZDNet
Companies highlighted in each category were: Microsoft (hosting); Microsoft, General Electric, PTC and Siemens (IIoT platforms); Uptake (analytics); Nvidia (microchips); Festo (sensors); HMS (connectivity hardware); Claroty (cybersecurity); Accenture (systems integrators); General Electric (additive manufacturing); Upskill (augmented and virtual reality); ABB (collaborative robots); Cognex (connected machine vision); PINC (drones/UAVs); and Clearpath Robotics (self-driving [material transport] vehicles).
Digital Twin: A key emerging IIoT technology
As noted earlier, a number of emerging technologies are creating suitable conditions for the adoption of IIoT solutions, including 5G, edge computing, AI and analytics, robotics, blockchain, additive manufacturing and VR/AR. One that deserves special mention is the digital twin, which can be defined as a virtual representation of a real-world entity or process.
With real-world 'things' modelled in software and fed with real-time sensor data, engineers can head off potential problems and use simulations to optimise performance. Digital twin technology is firmly at the 'peak of inflated expectations' in Gartner's 2018 Hype Cycle for Emerging Technologies, so it's worth looking at some recent research into the current state of play.
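Before turning to that research, the concept is worth making concrete. The Python sketch below (with entirely hypothetical names such as PumpTwin and feed_reading) shows the basic pattern behind a digital twin: a virtual model is updated with real-time sensor readings, and a simple simulation is run against it to flag a problem before it occurs. It is an illustration of the idea only, not any vendor's implementation.

    # Minimal digital-twin sketch in Python (all names are hypothetical).
    # A virtual model of a pump is updated with live sensor readings; a
    # crude simulation projects the warming trend forward so engineers
    # can act before a real-world limit is breached.

    from dataclasses import dataclass, field

    @dataclass
    class PumpTwin:
        """Virtual representation of a physical pump."""
        temp_c: float = 20.0                    # latest bearing temperature
        rpm: float = 0.0                        # latest shaft speed
        history: list = field(default_factory=list)

        def feed_reading(self, temp_c: float, rpm: float) -> None:
            """Update the twin with a real-time sensor reading."""
            self.temp_c, self.rpm = temp_c, rpm
            self.history.append((temp_c, rpm))

        def project_temp(self, minutes: int) -> float:
            """Naive simulation: extrapolate the last warming step,
            assuming readings arrive one minute apart."""
            if len(self.history) < 2:
                return self.temp_c
            per_minute = self.history[-1][0] - self.history[-2][0]
            return self.temp_c + per_minute * minutes

    twin = PumpTwin()
    for reading in [(62.0, 1480), (63.5, 1480), (65.2, 1485)]:  # simulated telemetry
        twin.feed_reading(*reading)

    if twin.project_temp(minutes=30) > 90.0:    # assumed alarm threshold
        print("Maintenance alert: projected bearing temperature exceeds limit")

In a real deployment the simulation would be a physics-based or machine-learned model rather than a linear extrapolation, but the update-then-simulate loop is the core of the pattern.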
Digital marketing specialist Reboot Online has analysed data from research facilities provider Catapult on digital twin technology. Catapult's information comes from a survey of 150 engineers and is reported in more detail in its report Feasibility of an immersive digital twin:
The report covers the definition of a digital twin and discussions around the benefits of immersion. The key components of a digital twin, according to the engineers, are a physical asset, live and offline data sets, a 3D representation and real-time simulation:
Data: Catapult-Reboot Online / Chart: ZDNet
When it comes to the value of digital twins in the product life cycle, maintenance, repair & operations and manufacturing are the clear leaders, followed by simulation and quality control:
Data: Catapult-Reboot Online / Chart: ZDNet
"We are in an era of rapid technological developments. At the forefront of that has been the rise and evolution of digital twins," said Naomi Aharony, managing director of Reboot Online in a statement.
"With the technology having the ability to cover the entire life cycle of a physical system, process or product, it provides businesses with a powerful analytical tool which can thoroughly assess key performance indicators and provide insights as to where enhancements can be made.
In the long-run, the lessons and suggestions taken from digital twins will drive various opportunities for innovation and growth," Aharony added.
Outlook
Although many IIoT projects are still in the proof-of-concept or trial stage, there are clear signs of a widespread move towards full production deployments. The relevant technologies are available, investment decisions have been made, and returns on those investments are expected.
RECENT AND RELATED CONTENT
Ericsson extends IoT portfolio
Ericsson now has four different Internet of Things segments across Massive IoT, Critical IoT, Broadband IoT, and Industrial Automation IoT.
How IoT might transform four industries this year
Healthcare, manufacturing, automotive, and public sector set to see big changes.
GE's new industrial IoT software business: What it means for customers
With GE's new standalone software business, the company has given the clearest signal yet that running a successful software business is very different from running an industrial conglomerate.
Nokia wants to make your IoT project easier to set up and run
Four IoT packages aim to make enterprise deployments of sensors and analytics easier.
Manufacturing industry at higher risk of cyberattacks thanks to industrial IoT (TechRepublic)
Industrial IoT devices and Industry 4.0 initiatives are putting the manufacturing industry at higher risk of breaches and attacks.-
Francisco Gimeno - BC Analyst If you have never read about IIoT, it is because until now even talk of IoT was problematic, due to the difficulty of integrating everything into a complex network. However, the technology is improving and expanding, and even the industrial sector is trying to get itself into IIoT. This is another sign of the impending 4th IR, where we can expect a lot of developments in the near future.
-
-
A new report highlights that many of Europe's AI startups appear to be cashing in on the hype and have no actual AI to speak of.
The fact that you can add 'AI', 'IoT', or 'blockchain' to your company name or description and watch your valuation skyrocket has become something of a running joke in the industry. Shares in Long Island Iced Tea, for example, infamously shot up almost 200 percent after the company changed its name to 'Long Blockchain Corp.'
European AI startups have been similarly cashing in, according to research by London-based investment firm MMC Ventures.
MMC Ventures was unable to find any evidence of AI applications at 40 percent of the 2,830 AI startups it examined in Europe. Many, of course, do have plans to develop AI in the future.
‘Artificial intelligence’ has been used to define many things including the automation of tasks, machine learning algorithms, and complex neural networks. This has given businesses a broad scope to claim they’re using AI in some respect.
European venture capital groups pay attention when a startup claims to be using AI: funding is between 15 and 50 percent higher than for a typical software startup, according to MMC Ventures' research.
However, the number of startups actually using AI is rapidly increasing. One in 12 now use AI, compared to one in 50 six years ago, and 12 percent of large companies have started using AI in their business, compared to just four percent the prior year.
The UK is the powerhouse of European AI, with a third of the continent’s startups. According to data from Capital IQ, European investors have doubled their UK investment over the past year.
Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.- By Admin
-
Jakobo Gimeno I love how everyone is slapping the words AI onto their companies to get attention these days. I do not blame them: AI is considered a major milestone in technology, so claiming to have AI gives companies great exposure; from what I have seen, a simple data-collecting system is enough to be called AI. Even so, investment in AI is interesting to see, because a lot of companies in different fields of work are trying their best to get AI technology involved in their work. Last year I read a great article about how an AI system played the extremely difficult game Dota 2; after 9 million test matches it mastered the game and was able to beat the best human team easily, the first time it played against actual humans. I can't wait to see the impact AI will have once it becomes common in companies.
-
Francisco Gimeno - BC Analyst AI is already introducing itself all around us and will grow exponentially in the near future. Hence the hype and the self-marketing of some companies using the AI label to get more recognition. Time will put everything in its place, as is happening in the blockchain and crypto fields.
-
This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.
Author: Hilary Sutcliffe, Director, SocietyInside & Conrad von Kameke, Director, BioInnovators Europe
To harness the power of new technologies for social good, the UN, governments, businesses and civil society groups are calling for new frameworks for global, national and local cooperation around technology development.
These may require a redesign of systems, new models, new skills and new mindsets to rise to the opportunities and challenges posed by the Fourth Industrial Revolution (4IR).
This will be tough for policymakers, businesses and citizens – we humans don't much like upheaval and change, particularly at the current speed and scale, which seems unlikely to slow down.
So what should we consider when designing policy and governance to focus new technologies on the flourishing of people and the planet, without causing more problems than they solve?
Remember – 'tech is not like weather'
With the sheer pace of change, it is understandable that many feel technology development is out of control; like we’re strapped to a speeding train, trying desperately to figure out whether that’s a light at the end of the tunnel or another train coming to flatten us.
While technological advancement can be exciting, it can also be disempowering and unnerving, not least for policymakers, as tech appears to move faster than the capacity to steer or govern it. But the 2018 Nobel Prize-winning economist Paul Romer urges us to remember that, "Technology is not like weather, it doesn't just happen to us." He reminds us that technology is fundamentally under our control and "if we collectively set our minds to improving technology, we can improve it in a direction that seems to be important to us and at a faster rate".
While there have been some significant technology mis-steps in recent times, there are also many examples where we are indeed “collectively setting our minds” to creating positive change and basing policy and tech development on human values.
Check out the uses of AI4good, or great stories on TechForGood, or even join people in your area bringing the Sustainable Development Goals to life in SDGsInAction.
Make values and ethics central to policy and tech development
The vision of the WEF Global Future Council on Values, Ethics and Innovation is “to make it normal to discuss values and ethics and bring them to life in policy, investment, business and governance”.
It might seem obvious that the well-being of people and the sustainability of the planet would be at the core of these critical areas, which control every aspect of our lives. What seems almost bizarre is that it is currently seen as "normal" to deliberately avoid these issues in many areas of policy and business, treating them as irrelevant at best, or even as inhibiting, in pursuit of narrow, mostly economic goals.
Discussion alone perhaps doesn't feel purposeful enough. But psychology tells us that making something a normal topic of discussion is a massive first step in creating change.
If policymakers, governance designers as well as technology developers, and even researchers, started more often by discussing how they can work more closely with society to understand the values and ethics underpinning their work, that would be hopeful. If they then took the practical steps to embed these values even more in their policies, governance instruments, research projects and products, it would be transformative.
Again, there are signs of real hope. Global debates about the values and ethics of data use, artificial intelligence, robotics and gene-editing are omnipresent. There is no shortage of discussion and debate about how values can better influence policy and governance, R&D and product development – though only time will tell how effectively this is connected to real action and behaviour change.
Join the dots – integration essential
The recent World Economic Forum white paper Values, Ethics and Innovation: Rethinking Technological Development in the Fourth Industrial Revolution highlights the importance of a systemic approach to policy – one which explores where and how values and ethics can be integrated into policy, from education to funding, product design to governance.
The report outlines a compelling vision. It illustrates that only through a genuinely joined-up approach, with new collaborations and partnerships between policy, academia, business and civil society working across the entire innovation ecosystem, can we begin to really influence priorities and actions at every level and help design the 4IR, from the bottom up, to be safer, more equitable and more sustainable than the revolutions that went before.
The First Empowerment Revolution
One of the defining characteristics of the 4IR is that citizens are no longer compliant and trusting of those with power. Many companies and policymakers are seeing a transformation brought about by social media and mobile technologies, where many more are able to express themselves directly and powerfully to challenge the status quo.
Perhaps what we are seeing is not just a Fourth Industrial Revolution, but a First Empowerment Revolution?
Vocal citizens across the globe are demanding that their views be listened to and responded to, as can be observed from the Arab Spring to current political developments in the US and Europe.
But the reach of this phenomenon goes well beyond these examples. It also occurs in a more focused, 4IR-related context: think of employee opposition such as Google employees' protests against the proposed censored search engine in China, or Microsoft employees' concerns about AI being used for the US military.
A new, empowered, vocal society may be quite unnerving, particularly to those who are used to having a sense of control. It can feel anarchic, chaotic and worrying, and it can be difficult to know how to respond.
Our work exploring the roots of trust and distrust in technologies and governance sought to understand how other intractable disputes, such as peace negotiations, are conducted successfully.
There is a rich seam of knowledge yet to be explored, but three key factors appear paramount in designing successful collaborations from a base of potential conflict:
1. Process matters; it is through collaboratively designed, trustworthy processes that we will best be able to navigate the potentially conflicting priorities, difficult trade-offs and clashes of values that will be an inevitable part of navigating the complexities of the 4IR.
2. The smallest glimmer of shared values and a shared mission, offering a connection to the greater good as well as to everyone's self-interest, can move things forward into a space that didn't exist before.
The Sustainable Development Goals provide us with much more than a glimmer of hope; they offer us a go-to agenda that can help shape technology to be more inclusive, responsive and focused on social and environmental good.
3. Processes that are designed to embed respect for each other and earn the trust of diverse parties, with evidence of that respect being clear to all.
It may not be easy, but creating a world in which people and planet flourish like never before is most certainly worth our utmost effort.-
Francisco Gimeno - BC Analyst The incoming tech revolution can produce a dystopian world if it doesn't lead to empowerment and is not anchored in human and ethical values. This is why it is so important for all of us to get our hands on the collaborative design of a new society under the 4th IR.
-
The Fourth Industrial Revolution has arrived with the emergence of new technologies. Is your company ready to join in?
The Fourth Industrial Revolution (4IR) is building momentum and creating opportunities and challenges for businesses of virtually all types and sizes, particularly those engaged in leading a successful digital transition.
Emerging 4IR technologies, such as autonomous vehicles, VR/AR, AI, robotics, blockchain and IoT, are poised to simultaneously raise the stakes and create new opportunities across an array of business sectors.
The 4IR represents a crash of two worlds, according to Mohamed Kande, an advisory leader at professional services firm PwC. "It’s the collision of time and technology, the wall of physical assets colliding with a digital wall," he explained.
Image: Mohamed Kande
The challenge now facing industries worldwide, Kande observed, is handling the emerging business models and evolving customer expectations created by new 4IR capabilities.
"With this in mind, a key issue for executives will not only be to understand how these technologies can benefit their businesses, but how to react to drastic market changes and divergence caused by the unforeseen technological innovation," he said.
The task ahead
IT leaders are currently facing two high-level tasks: the need for business model innovation and identifying the next generation of challenges. "Simply put, typical product innovation will no longer be enough," Kande warned, noting that leaders will need to balance product innovation against business model innovation. "This puts a premium on identifying new ways to serve the market."
IT leaders can prepare themselves for 4IR technologies by embracing them in combination to solve various business issues. "If the technologies aren’t employed to solve a business problem, they aren’t generally going to help the organization keep pace with the rate of change," commented Scott Buchholz, national emerging technologies research director for business consulting firm Deloitte.
Image: Scott Buchholz
Getting started
Connectivity and collaboration are the keys to preparing a business for new technologies, said Gregory Hayes, director of North America applications and consulting at EOS, an industrial additive manufacturing systems developer. "When considering implementing new technologies like AR, VR or Blockchain, no one company is going to be able to do it alone successfully," he observed.
"IT leaders can prepare by identifying and partnering with organizations that have pieces of the IT solution that they need to implement."The advanced technologies discussed here will be key themes of the new Emerging Tech track at Interop 2019, running May 20-23 in Las Vegas.
A common way to evaluate promising 4IR technologies is with pilots or proofs of concept projects. "Not every technology needs to be in production tomorrow," Buchholz advised. "This also gives staff a chance to adjust and learn the new technologies as they go."
Image: Greg Hayes
Yet despite best efforts, many 4IR projects fail to make it past the proof of concept pilot stage. "This is not because the project won't deliver value," observed Stephan Biller, chief innovation officer and vice president, IBM Watson IoT.
"The problem is that leaders fail to plan in advance how to measure value and how to build the business cases to move the projects into production.
"Developing and nurturing a future-focused workforce is also essential for successful 4IR adoption.
"People not only provide execution, but also help to ensure seamless technological adoption within an organization — and leaders need both," Kande noted. "If you’re making the decision to invest in next-generation technology to reap the benefits of the 4IR, it's important that your people know how to properly use and interact with these technologies on a daily basis."
Image: Stephan Biller
In the years ahead, virtually all types of businesses will need designers and engineers who can think in 4IR terms. "This may take shape in re-learning design methodologies, understanding and accounting for security concerns or other factors that require a learning curve to transform ways of working from Industry 3.0 to Industry 4.0," Hayes said.
Investments in IT, operational technologies and HR should be synchronized to deliver the maximum possible ROI.
"Leaders need to ensure their digital investments match their investments in people," Kande explained. "If an organization only focuses on the technology, they are missing an opportunity to close critical skills gaps within their organization, which is a big reason why many digital transformations fail.
"Maximizing 4IR ROITo maximize ROI, IT and business leaders planning to in invest in 4IR technologies should consider taking a step by step approach toward adoption.
"Developing a strategy from the outset, realizing the total investment needed and calculating your potential ROI over time, are critical steps in maximizing return," Hayes said.
Image: Jay Venkat
As projects move from proof of concept into the pilot stage and beyond, they need to be constantly monitored and tracked against defined KPIs, Biller stressed. "Defining the KPIs and tracking their progress gives you well defined results," he explained.
"Tracking the metrics also helps you obtain credibility and buy-in from your teams and business partners as you expand projects into the next stage with production rollouts.
"As business and IT budget boundaries continue blurring in the years ahead, CIOs and other business and IT leaders it will need to begin focusing on overall business value rather than counting budgets in functional silos, said Jay Venkat, senior partner and managing director of the Boston Consulting Group, a global management consulting firm.
He noted that IT leaders must also invest in themselves by keeping on top of emerging technologies and understanding how to harness their capabilities. "Doing so will position them for dialogues with business counterparts on curating the right set of tech," Venkat advised.
[For more on implementing today's emerging technologies and what they mean to business, check out these recent articles.]
The Problem with AI Facial Recognition
Prepping the Enterprise for the AI Apocalypse
Digital Transformation Rx: Moving Health Care to the Cloud
John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic ...
-
Francisco Gimeno - BC Analyst Despite the fact that the 4th IR is foreseen as very disruptive, both in speed of change and in technological leap, the truth is that human beings don't need to fear it, but to prepare, produce strategies and work to be able to harness changes in society and the economy. Are you preparing?
-
-
Turning Point: A World Bank report concludes that more than 143 million people will become "climate migrants" escaping crop failure, water scarcity and sea-level rise.
When culture and recreation come together, communities emerge.
When communities become societies a settlement is formed.
In those realities we inhabit our aspirations of togetherness.
Sustainable cities are like a forest: ever-growing and diverse. In a forest, each branch, each trunk, each tree is unique, blossoming in its own way.
Yet everything is connected. Everything in the forest has its role in a cosmic symphony. The city is no different. The city, too, is an organism, both stable and fluid, static and constantly transforming. Humans are a part of the city's inner mechanism, just as our cells are a part of us. Streets act as veins, connecting us to a network of life similar to a bio-diverse forest.
So why do we not see our cities, our towns, our hamlets as biotechnological entities? Why do we not plan and build them in natural ways that reignite the spirit of community, the spirit of a positive participatory culture?
Consider Jaipur, where Maharajah Sawai Jai Singh II ruled in 18th-century India. He envisioned the city as a paradise on earth.
Taking into account the constantly changing climate, as well as the movement of the sun, Mr. Singh created a city built around guilds and clusters of sustainable, cooperative housing. As Jaipur cultivated the body, the mind and the spirit, it thrived socially, economically and culturally.
Jaipur recalls the ancient vastu purusha mandala — a philosophy of design that aims to create a balanced and healthy environment. This ancient science shaped most of India’s traditional settlements, where seasonal activities such as festivals and fairs take place. The mandala adapts to totally different climates and places, and, in turn, inspires them.
Unfortunately, we have since forgotten this soulful approach to architecture and design, following instead the prevailing planning model of big budgets, large-scale structures and isolated behaviors. Consequently, our habitations have become fragmented and we fail to see the city’s infrastructure and life in an integrated way.
Image: The Palace of Winds in Jaipur, an Indian city that thrived in the 18th century. Credit: Vijay Mathur/Reuters
Instead of building more megastructures — which constantly consume time, energy, and human and natural resources — should we not follow a more natural, biological approach to architecture that would foster small but comprehensive clusters of settlements and perhaps create a new world?
These smaller settlements would be sustainable and replicable. They would be full of energy and vitality, but they would not grow beyond a certain size. They would possess the same virtues as a bio-diverse network. Such settlements would not waste time or energy or natural resources. The inhabitants would have global skills and a suitable, fulfilling lifestyle.
This, as a result, could help salvage our planet from the present disasters and disparities that spawn anxiety and doubt about the future. Often while visiting ancient towns and cities, which are socially, economically and culturally well-knit, we are struck by a strange, unexpected silence and slowness.
Our desire to push, to achieve, to conquer dwindles, and we think more of how nature connects us and how we can share and revere our intrinsic selves. In addition to such quietude, other aesthetic measures of settlements include grace, love, compassion and humility.
To animate a settlement one must create humble and tender connections, which encourage humans to come together and to share and to feel themselves a part of a larger order, a part of Mother Earth. In ancient Indian texts, the sthapati (the architect or planner) has to be aware of the sustainable cycles of nature, following the laws of time and energy, just as our ecosystem does.
The sthapati is obliged to integrate this natural flow with the lives of a settlement’s inhabitants. This method of interdependent planning allows for cultural activities and social integration. This form of sustainable architecture gives all individuals, regardless of class or creed, the ability to connect with their true natures.
Image: Balkrishna Doshi
Isn’t this why some Japanese homes have a small bonsai tree to remind them of their connection to the eternal mystery of existence?Today, though we are globally connected, we are lost spiritually.
Prana — the subtle energy that can only be felt — is the missing link that, if ignited, could enliven the spirit of the community once again. Can we not apply these planning philosophies in the present to create a lasting environment of positive participatory culture?
Balkrishna Doshi is the Pritzker Prize laureate for 2018, the first Indian winner of the architecture world’s most prestigious award. This is an article from Turning Points, a magazine that explores what critical moments from this year might mean for the year ahead.
-
Francisco Gimeno - BC Analyst Reading reports on the future of cities, we find that some cities will have 70 or 80 million inhabitants by 2050. Is that really sustainable? Do we even want to live in those monstrous places? How can we take the opportunity that the 4th IR is giving us to change the way we live and work, striving for sustainable cities in commonality with our biological needs and connections to the ecosphere? We feel the tokenisation and disruption of work and life it brings will be a first and huge step.
-
-
Technology develops faster than ever these days. As it develops in new and unexpected ways, we're left to wonder what it will mean for our everyday lives and how it will change them.
Some have argued that the technological advancements of today place society in an industrial revolution, the fourth since the late 18th century. What would separate this fourth industrial revolution, though, is its direct relation to the one preceding it - the "digital revolution".
So, what is this fourth industrial revolution? What technology is a part of it, and what is its impact?
Fourth Industrial Revolution
The First Industrial Revolution is famous for industrializing agricultural work with advances like the cotton gin and steam engine. The Second Industrial Revolution, in the late 19th and early 20th century, brought iron and steel into industry. The Third Industrial Revolution, the Digital Revolution, is the age of the computer and the internet.
That is an age we still live in, but a supposed Fourth Industrial Revolution seeks to develop digital technology further. It takes the digitization of our society and seeks to explore new uses for it while still advancing the technology. From smart technology to cyber-physical systems, this age of technology seems to be more about pushing the limits of what digital technology can do throughout any possible field.
This is another thing it shares with the Digital Revolution; while the First Industrial Revolution was primarily about agriculture, both the Third and Fourth Industrial Revolutions are marked by technological advancements that can affect all industries in astonishing ways.
Who Coined the Term?
Certainly it wasn't difficult to call it the Fourth Industrial Revolution, considering there were three before it. So who decided that we had officially entered this new era?
The person who coined it in its most relevant sense was Professor Klaus Schwab, the founder and executive chairman of the World Economic Forum, using it as the title of a 2016 book. Due to the high-tech nature of this age, it has also been referred to as Industry 4.0, or I4.0.
The Technology of the Revolution
We're being told we're entering a Fourth Industrial Revolution, but the technology we use the most - our computers, our phones, our video game consoles - are part of the Digital Revolution. What burgeoning technology of Industry 4.0 is on the horizon, and what is already here?
Some of the more fascinating advancements that have already begun are virtual reality and artificial intelligence, or VR and AI. In addition to the VR technology now available in video games, most notably with Oculus Go, it is now available for live television. The BBC made it possible to watch the 2018 World Cup via virtual reality.
A VR headset wasn't a necessary purchase, as it worked through smartphones and tablets as well. AI, meanwhile, has gone from a crazy futuristic thing in movies to something that autocompletes sentences and responds when you ask for Siri on your smartphone. It has been casually integrated into your everyday technology, and will be integral to self-driving cars, a technological advancement that people have anticipated for years.
It's an interesting quirk to contemporary society that AI went from something many feared to something that corporations gave a cute human name to so you'd feel comfortable asking it to help you shop.
In the Fourth Industrial Revolution, we've applied the internet to our outside lives. Order a package from Amazon (AMZN) or a pizza from Domino's (DPZ)? Now you can stare at your computer or phone and track it from start to finish.
The rapid advancements of technology's capabilities are affecting our warehouses, farms and hospitals as robotics gets more and more impressive. It has, though, led to ethical questions about automation.
Potential Impacts of the Fourth Industrial Revolution
This industrial revolution brings potential for incredible change worldwide - both good and bad.
In many ways, these advances can bring incredible innovations. Information is more convenient to find, and new strides are being made in productivity and efficiency in a number of industries. Communication is faster and more readily available around the world.
Some have high hopes of using the technology from this era to improve the world in ways like limiting carbon emissions. There are many ways in which advancing technology is teeming with potential to better our earth.
However, we would be wearing some awfully rose-tinted glasses by saying that is all it can do. There is tremendous risk in all of this. There is fear that continued automation in so many industries could lead to millions finding themselves out of work. This is also arguably true about aspects of Industry 4.0 that "disrupt" other industries, which can lead to less consistent work and lower wages.
With all the data used to enhance and personalize AI, we have also become a far less private society. This data can include personal information, and data breaches in large companies are more common than anyone would prefer.
Bioengineering and biotechnology are impressive on a scientific level, but how are they being used?
Biotechnology can be of great use in improving prosthetic limbs, but the potential for genetic modification brings up an ethical dilemma. Bioengineering and robotics can cause great strides in the medical field among other industries, but they can also be used to destructive ends as well.
This is the confusing and fascinating element of being in the beginning of a new technological revolution. So much is evolving so fast, and so many people have different intended uses for these innovations. In his book, Schwab calls on everyone, those in charge of these new technologies in particular, to think critically about the ends to which they can be used, and how they can benefit the people.-
Francisco Gimeno - BC Analyst The 4th IR is coming! Like "Winter is coming!" in the famous Game of Thrones books, it is both a promise of change and of huge disruption. We are for it, even with the inherent risks of any revolution. The 4th IR should mean people's empowerment and technology for the good of humankind, not for the digital elite only. Let's work for it.
-
-
Blockchain technology is so synonymous with cryptocurrencies, and especially Bitcoin, that it is almost as if the financial sector has usurped its potential. In times like these, when an investing bear market has befallen the cryptocurrency space, it is easy to get down on the revolutionary possibilities of blockchain technology.
Since the original blockchain, that is Bitcoin, emerged, there has been a considerable focus on transactional blockchains, which have been at the forefront of the mainstream understanding of the technology. Bitcoin is often the layman's first port of call, with stories of investing success obscuring the view of other possibilities.
However, blockchain technology is moving along in an undercurrent separate from the comings and goings of the cryptocurrency market and the financial interest it has garnered in just a few short years.
Ethereum and smart contracts have taken blockchain technology to a second generation where many different sectors are in the sights of its potential disruption. Even a third generation is being bandied about, with regard to Directed Acyclic Graphs, but the fourth generation - which will be an essential part of the fourth industrial revolution - will need the help of some similarly revolutionary technology.
Artificial Intelligence (AI) has been cutting a distinct but similar path through the nascent stage of technology development. Its uses and adoption have been growing, and its implementation has reached a critical point.
Both AI and blockchain are in situations where they can benefit from, and help, one another in reaching the next step on the road to this fourth industrial revolution. There is already evidence of this embryonic partnership beginning to blossom, with many sectors feeling their twin disruptive powers.
The trifecta: Big data, AI, Blockchain
While blockchain and AI have been forging their paths with little overlap in their own past 10 years of existence, there is a clear link between the technologies in the form of data. Big Data's recent emergence and importance have catalysed the relationship between blockchain and AI.
The biggest reason for this AI revolution is the advancements in Big Data. These recent developments have allowed businesses to organise a large amount of data into structured components which can be processed by computers very quickly.
At the same time, this importance of data has fuelled blockchain's advancement, as its distributed ledger offers a novel and effective alternative way for data to be stored.
To this end, the need for data analytics with AI is growing, and the combination of AI and blockchain is part of the reason for the onset of the Fourth Industrial Revolution.
With both these technologies able to act upon and make use of data in different ways, their coming together makes sense, and it can take the exploitation of data to new levels.
At the same time, the integration of machine learning and AI into blockchain, and vice versa, can enhance blockchain’s underlying architecture and boost AI’s potential.
The offerings of blockchain, such as its security and immutability due to its cryptographic nature, make it ideal for storing the highly sensitive, personal data which, when smartly processed, can unlock so much value and convenience in our lives.
One of the most significant sectors trying to unlock this protection and processing of sensitive data is healthcare. Additionally, blockchain can make AI more coherent and understandable: we can trace and determine why decisions are made in machine learning, because a blockchain ledger can record all the data and variables that go into a decision made by a machine-learning model.
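As a rough illustration of that record-keeping idea, the Python sketch below chains hashes of machine-learning decision records, so that any later tampering with a logged input or output is detectable. This is a generic illustration, not the design of any particular blockchain platform, and the record fields are invented for the example.

    # Hash-chained audit log for ML decisions (illustrative sketch).
    # Each entry commits to the model's inputs, its output and the
    # previous entry's hash, so retroactive edits are detectable.

    import hashlib, json

    def record_decision(log: list, inputs: dict, output: str) -> None:
        """Append a decision record whose hash covers the whole chain so far."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = json.dumps({"inputs": inputs, "output": output,
                           "prev": prev_hash}, sort_keys=True)
        log.append({"inputs": inputs, "output": output, "prev": prev_hash,
                    "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(log: list) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev_hash = "0" * 64
        for entry in log:
            body = json.dumps({"inputs": entry["inputs"],
                               "output": entry["output"],
                               "prev": prev_hash}, sort_keys=True)
            if entry["prev"] != prev_hash or \
               entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev_hash = entry["hash"]
        return True

    audit_log = []
    record_decision(audit_log, {"age": 54, "bp": 130}, "refer to cardiologist")
    record_decision(audit_log, {"age": 31, "bp": 118}, "no action")
    print(verify(audit_log))              # True
    audit_log[0]["output"] = "no action"  # simulated tampering
    print(verify(audit_log))              # False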
What's more, AI can boost blockchain efficiency far better than humans, or even standard computing, can. A look at the way blockchains currently run on standard computers proves this, with a lot of processing power needed to perform even basic tasks, such as hashing.
Hashing power is a hammer-and-tongs approach, whereas a smart combination of AI and blockchain would see far more efficient code-breaking.
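To see why hashing is "hammer and tongs", consider this toy proof-of-work loop in Python: the only way to find a nonce whose hash meets the target is to try candidates one by one, and each extra zero of difficulty multiplies the expected work roughly sixteen-fold. The difficulty setting and block data here are arbitrary illustrations.

    # Toy proof-of-work (illustrative): finding a valid nonce is pure
    # brute force -- there is no shortcut other than trying candidates.

    import hashlib

    def mine(block_data: str, difficulty: int = 4) -> int:
        """Return a nonce whose SHA-256 digest, combined with the data,
        starts with `difficulty` hex zeros."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    nonce = mine("block #1: alice->bob 5")
    print("found nonce:", nonce)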
What the Fourth Revolution will look like
It is clear that the blockchain can tackle many inefficiencies in our technology as it stands, and AI similarly so, because of the importance that is put on data currently.
However, if it is to drive the fourth revolution, there needs to be an integration of the two technologies which will then have the potential to change some very steady and familiar technologies we take for granted today.
Paul Lee, CEO of Mind AI, understands the importance of blockchain technology merging with AI and how it can shape our upcoming future. "One big, immediate way that blockchain can help AI is through its ability to decentralise the ownership and sharing of data," he explains.
“Currently, large corporations with massive amounts of data - like Google, Facebook, Baidu, Tencent - can own their stockpile of user data to create and continuously improve AI.
“Blockchain has the potential to break down today's data oligopoly through individual ownership and control of data.
This will open access to vast numbers of data sources that can be used by AI developers who previously didn't have access to such data. "This means that there will also be a data marketplace, which can be a free market for developers looking for specific types of data for their projects. In the same sense, AI can boost blockchain, as some of the most compelling use-cases and early-stage applications of blockchain technology are for AI.
This will help pave the way for more progress in blockchain technology to better mesh with AI, and this progress and these applications will inevitably have a spillover effect into other domains, helping boost the rate of blockchain adoption."
Yoav Vilner, the former CEO of startup marketing company Ranky, adds to Lee's thoughts about how data will be redistributed in the fourth industrial revolution. "Making private data secure again will invariably lead to it being sold, resulting in data markets, model markets, and with the introduction of intelligence systems, maybe even AI markets.
The markets will have easy, secure data sharing that will help smaller players enter the fray," Vilner explains.
The examples are quite endless, but beyond de-powering the technology oligarchies, the combination of these two technologies can also rewire the entire cybersecurity environment, as Karin Flieswasser explains:
“When AI and blockchain are put together, they provide a double shield against cyber attacks. Machine learning algorithms can be trained to automate real-time threat detection and to continuously learn about the behaviour of attackers, thereby thickening the malware detection armour.
Meanwhile, decentralised blockchains dismantle the inherent vulnerability of centralised databases, requiring cyber attackers to challenge not one but several entrance gates."
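As a minimal sketch of the machine-learning half of that "double shield", the Python snippet below learns the normal range of a single traffic metric and flags readings that deviate sharply from it. Real intrusion-detection systems use far richer features and models; the baseline figures and the three-sigma threshold here are assumptions for illustration.

    # Minimal anomaly detector for a traffic metric (illustrative sketch).
    # Learns the mean and standard deviation of "normal" requests-per-second,
    # then flags readings more than 3 standard deviations away.

    import statistics

    normal_rps = [102, 97, 110, 105, 99, 108, 101, 95, 103, 100]  # baseline sample
    mean = statistics.mean(normal_rps)
    std = statistics.stdev(normal_rps)

    def is_threat(rps: float, z_threshold: float = 3.0) -> bool:
        """Flag readings far outside the learned normal band."""
        return abs(rps - mean) / std > z_threshold

    for reading in [104, 98, 430]:    # the last value simulates a request flood
        print(reading, "ANOMALY" if is_threat(reading) else "ok")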
Coming together
Both blockchain and AI are continually coming up with new ways to advance our technological lives; however, there is no denying they are both very much at the nascent stage. That said, the combination of the two as a dual technology to tackle today's problems for the fourth industrial revolution still has a long way to go, but their intertwining is moving forward.
“AI and blockchain technology will intertwine more intensely and frequently in the future, whether or not the AI project/company is a centralised business or an open source project.
We already see multiple great projects around distributed computing for AI applications as well as various AI marketplaces, all utilising various blockchain technology," Lee explains. "Of course, other AI projects might not need any technological benefits offered by blockchain at all, and that is also totally okay.
Though there will be projects and businesses on both sides of the spectrum, my prediction is that blockchain and AI will come together more intensely and frequently, as we will need more transparency, security, and decentralisation in the AI space."
"A whole new worldThe fourth industrial revolution will be the first major revolution that has gone from a technology-focused state to another, more advanced one. For that reason, it is not surprising that it could well be predicated on technologies that are currently profoundly misunderstood, and often seen as too advanced.
Blockchain and AI have massive potential, but their actual effect will only be seen a long way down the line, and they will also only be effective when tied together as they both boost each other up, and expand on their different capabilities.
I am an award-winning journalist who has covered a variety of topics from finance to economics, technology, and even sport. With the emergence of blockchain technology and the rise in popularity of cryptocurrencies I have focused my efforts towards this fascinating and impo...-
Francisco Gimeno - BC Analyst A revolution is coming: new technologies which can help the whole of humankind take an exponential leap into a new era where, if properly done, everybody can benefit. This is not a utopia, and if misused it could even lead to a dystopia where elites use these technologies to get richer at the expense of the masses. But we are optimistic that awareness is spreading in this digital era and that society will be ready to get over any hurdle on the way.
-
-
Certain skills will be required to succeed in the Fourth Industrial Revolution. Here are four recommendations to help bridge the youth skills gap.
By Jamira Burley
The rapid march of emerging technologies has ushered in the Fourth Industrial Revolution and along with it concern from many in business, government and academia about the impact on today’s workers, not to mention the workforce of the future.
By 2030, an estimated 1.8 billion youth worldwide will not have the skills or qualifications required to participate in the workforce, according to predictions in a new report by Deloitte Global and the Global Business Coalition for Education (GBC-Education).
“Business has to play a leading role by not only defining and communicating what skills are needed in the future, but also by working side by side with educators, governments and nonprofits to ensure our future employees are receiving the education necessary to compete and succeed,” said Deloitte Global Chairman David Cruickshank.
Four Skills Essential for Success
Titled "Preparing tomorrow's workforce for the Fourth Industrial Revolution," the new report found that four skills emerge when looking at what will be required for individuals to succeed in 4IR:
- Workforce readiness: Basic skills such as time management, personal presentation and attendance are critical.
- Soft skills: As humans increasingly work alongside robots, uniquely human skills, such as creativity, complex problem solving, emotional intelligence and critical thinking, will be irreplaceable by machines.
- Technical skills: New employment opportunities are being created through technology. Jobs that are currently going unfilled often require industry-specific technical skills and targeted training.
- Entrepreneurship: As the gig economy grows, youths’ ability to be innovative, creative and take initiative to launch new ventures will be critical.
Financial investment alone will not employ 1.8 billion youth. Instead, new system-wide approaches are needed.
Businesses currently make trade-offs between scale and impact, but this research suggests ways to achieve both. It is critical to overcome the challenges of reaching the most marginalized youth, including women and girls who in many parts of the world already face significantly higher rates of unemployment.
Four Recommendations to Bridge the Skills Gap
Within this landscape, the following are four key recommendations to address the youth skills gap.
First, align stakeholders’ objectives and approaches. In order to achieve scalable results, businesses need to work with the broader ecosystem, implementing an integrated approach that leverages each group’s strengths and capabilities for impact.
This includes coordinating opportunities, identifying gaps in training, finding opportunities for co-investment and sharing information about future talent needs.
Second, engage in public policy. Business has an opportunity — and a responsibility — to help governments prepare policies, rules and regulations that will benefit youth and strengthen our future workforce.
Dialogue, advocacy, collaboration and influencing government are key means to drive results.
Third, develop strong talent strategies. Reviewing and adapting current talent strategies will be important to future success, and developing best practices that promote inclusivity and innovation will be critical.
Last, invest in workforce skilling. Employee training can no longer be a “check the box” activity, and businesses need to evaluate, invest and promote workforce training programs strategically so future talent needs and requirements can be met.
GBC-Education will take the recommendations forward through its Youth Skills and Innovation Initiative by establishing an Action Hub, which will share information about programs that are working in the hopes that they can be scaled or easily duplicated.
At the heart of the issue is quality education and training, but there is now a framework for how to address the youth skills gap. Equally important, there’s a broad commitment across stakeholder groups and unlikely allies, led in large part by youth themselves, to bridge that gap.
Jamira Burley is the head of Youth Engagement and Skills for the Global Business Coalition for Education. She is a White House Champion of Change and Forbes 30 Under 30 honoree. To comment, email [email protected].-
Francisco Gimeno - BC Analyst Preparing the youth for the 4th IR is crucial. We agree with the recommendations in this article. We would add that unless governments and civil society act accordingly soon, the transition to the new digital economy and future of work will be more painful. It is a must, however, for anyone involved to work on his/her own ongoing personal training with an eye on the already present future, and not wait for external actors to impose this.
-
World’s most famous footballer will be involved in the launch of the Finney blockchain smartphone
BC SportBC Cryptocurrency
Jonathan Symcox
Finney phone is named after cryptocurrency pioneer Hal Finney
Lionel Messi is backing the launch of the world's first blockchain smartphone.
The Barcelona star is a brand ambassador for Israeli start-up Sirin Labs, which is launching the $1,000 Finney phone in Barcelona on November 29th. Its previous device was the $16,000 Solarin smartphone, launched in 2016 and aimed at high net-worth individuals who wanted "military grade security".
The Finney phone, named after cryptocurrency pioneer Hal Finney, will look to compete with premium consumer smartphones from the likes of Apple and Samsung. It has an inbuilt cryptocurrency wallet viewed on a screen which slides out.
Sirin said: "After a significant amount of time discussing a wide range of issues, Messi saw the power, professionalism and future within Sirin Labs, and it was for these reasons he agreed to represent us to the masses."
Acknowledging that sometimes apparent celebrity endorsements in the crypto space can lead to scams, a blog post continued:
“If you’ve followed Messi’s career at all, you’d know he doesn’t just put his name on anything to make a buck.
“As the greatest of all time in the most popular sport in the world, you can imagine the type of vetting process that he and his team have to ensure they’re not going to associate with something criminal, illicit, or anything that might tarnish his stellar brand.”-
Francisco Gimeno - BC Analyst Good marketing for the Finney phone! Messi is a household name everywhere in the world. But what exactly is a "blockchain telephone"? A telephone with a crypto wallet? Mmmmhhh... Let's wait for the launch date to get better input.
-
-
Image: Klaus Schwab, author of The Fourth Industrial Revolution. Photo: Xinhua
JOHANNESBURG – South Africa's economic development will ride in the cockpit of the Fourth Industrial Revolution. My assertion is based on the forecasts done by Klaus Schwab in his latest book, The Fourth Industrial Revolution.
According to this book, Africa will benefit immensely from the ageing declining populations in Europe, North and South America, the Caribbean, Asia (including China), southern India, and some Middle East countries.
This view is supported by the report published in 2011 by the African Development Bank (ADB) entitled “Africa in 50 Years’ Time”.
According to this report, Africa is the only region where there will be about 1.87 billion people of working age in about 50 years' time.
RELATED ARTICLES
OPINION: The fourth industrial revolution: Be prepared
WEF founder and chairman to address researchers conference in Pretoria
On the other hand, Africa will have more than 3 billion people by 2050. This means that around 74 percent of the African population will be of working age.
Amid rising nationalist sentiment across Europe and the US, European countries are tightening their migration regulations; that is why, instead of importing skilled labour, they move their firms to countries that have such labour. In the past century, East Asia and South America were the main beneficiaries of this kind of direct investment.
However, due to Asia's ageing populations, investors will move their manufacturing plants to Africa, where there will be abundant labour and consumers of produced products.
According to the ADB, there is a huge decline in Africa's child mortality rate and in deaths caused by HIV/Aids-related diseases. This is a significant factor in Africa's population growth.
Another crucial factor in favour of Africa's massive economic growth is the fact that the continent possesses half the world's arable land.
This will lead to massive agricultural investments, and Africa's food production will feed the whole world. This view is supported by the World Bank, which predicted that Africa's agriculture and agribusiness markets are destined to top $1 trillion (R14.39 trillion) in 2030.
According to Professor Calestous Juma of the Harvard Kennedy School of Government, several technologies will be deployed to boost agricultural output in Africa; these include geographical information systems, nanotechnology, biotechnology and mobile technology.
Four technological (industrial revolution) megatrends which will play a prominent role in driving economic development in the near future are: autonomous vehicles, 3D printing, advanced robotics, and new materials. Africa will be the biggest beneficiary of these technologies.
Due to the shortage of infrastructure in the form of roads, rail, border posts, airports, seaports and so on, it is currently cheaper for Nigeria to import food from Peru than from Cameroon.
Due to the fact that multinationals will mainly be operating in Africa, they will work with the African governments to build infrastructure that services their operations and transports their goods. In other words, infrastructure will be built through public-private partnerships.
It will be in the best interests of the investors to participate in the construction of the infrastructure. In some instances, the public (consumers) will also have to pay for this, in the form of taxes and payments such as tolls.
Currently, due to the lack of infrastructure, trade among African countries is limited. By 2050, intra-African trade will increase substantially thanks to the availability of regional connectivity.
The availability of multinationals and infrastructure in Africa will inevitably lead to free labour movement. Something good about labour movement is that it will increase the flow of remittances across the African countries.
The incremental growth of populations, industrial production, agricultural activities and mining will require huge quantities of water. In certain parts of Africa, there is a lot of water in the ground, and technology will be employed to extract such water.
The Fourth Industrial Revolution’s mega-technology will also be used to harvest rainwater.
Africa is surrounded by two oceans, the Atlantic and the Indian. Mega-technology will also be employed to extract water from these oceans and make it consumable.
Moreover, technology will play a critical role in the recycling of water. As a matter of fact, most production activities in manufacturing plants will be done with less water. Technology will also play a critical role in promoting intra-continental trading and the supply of water.
Although the South African population remains stagnant and will not grow rapidly, South Africa can become the biggest beneficiary of this African growth. That is why South Africa should cultivate better relationships with other African countries.
Among other things, we should stop being xenophobic and treating fellow Africans with arrogance and a condescending attitude. In the absence of a huge population like other African countries', South Africa's strength will be to continue serving as the gateway to the African continent and a regional financial hub.
Rabelani Dagada is Professor of Practice in Digital Commerce at the University of Johannesburg's Postgraduate School of Engineering Management. He is on Twitter: @Rabelani_Dagada
The views expressed here are not necessarily those of Independent Media.
-
Francisco Gimeno - BC Analyst Optimistic statements about African success due to the 4th IR. South Africa is probably the country where this will happen first, but the rosy future won't come automatically: it needs a lot of awareness, preparation and work on policies conducive to opening minds in education, finance, etc. We can say this, however: Africa can't waste this opportunity, maybe its last, to develop.
-
-
It’s the end of spring and millions of young Americans have put on caps and gowns, collected their college degrees, and are out looking for jobs. Many are aiming for the companies they consider the coolest, behemoths like Apple and Google and Facebook. I’d like to offer one piece of advice: If you want to know what it feels like to make a real difference, go anywhere but Silicon Valley.
Ten years ago, I was a recent grad myself, a tech dreamer with few prospects. Some of my friends went west to find their fortune; I stayed in Connecticut and bootstrapped a company out of my father’s basement. When it took off, I was summoned by the Valley’s oligarchs to meetings designed to impress me.
I’ll never forget one in particular: Sitting in a slick office and talking about million-dollar deals, I looked out the window and saw a homeless man in the street below sorting through garbage. This is the kind of radical gap between rich and poor you rarely see so conspicuously elsewhere in America, and, sadly, Silicon Valley does little to make it smaller.
In large part, it’s because the Valley’s main goal isn’t to make the world a better place; it’s to make investors wealthier. It is why an industry that started out as a vibrant and competitive market is now controlled by a few companies that treat people and ideas as just more lines on a spreadsheet.
And that, dear grads, is where you come in. Let me tell you something: America doesn’t need another blockchain startup or another app disrupting another industry — or whatever the latest tech trend in the Valley happens to be. It needs young people who understand two things: What you do matters and where you do it matters too.
Austin McChord built a tech unicorn in Connecticut — way outside of Silicon Valley. (Photo: Rebecca Greenfield)
The real world, as I’m sure you have noticed, has very real problems, and fixing them is up to nobody but you. Instead of shuffling off to some tech company’s campus to have your dry cleaning taken care of and your snacks provided, and your creative output consumed by some mammoth company, try asking the seminal question that every great entrepreneur — and every good person — should ask: What do people need? Muhga Eltigani did.
Born in Sudan, she arrived in America when she was five years old. She graduated from the University of Pennsylvania and had her pick of plum positions. Instead of heading to the Valley, she settled in Cleveland, courtesy of Venture for America, an impressive fellowship that sends recent college graduates to communities that are busy reinventing their economies.
There she worked on a healthy snack company that sought to address the obesity epidemic. Soon she launched her own company, NaturAll Club, which uses avocado oil to create hair products that meet the specific needs of African-American women.
The company doubled in size last year, with offices opening in midsized towns across America. This is what real innovation looks like, benefiting both employees and consumers in communities not traditionally served by the bottom-line-driven tech industry. Which brings me to my second, and closely related, piece of advice: Where you do what you do matters.
A lot. Instead of heading to Cupertino or Menlo Park, consider Des Moines or Detroit or Durham, the maligned but absolutely necessary “second- and third-tier” American cities that were once the backbone of our economy. Go to these proud places, and you’ll find three things that are crucial to success, no matter what it is that you choose to do.
The first is simple — your buck goes much further in real America than it does in the tony and overpriced towns of the Valley. The second is even more crucial: Settle down in a “second-tier city” and you’ll find a very helpful community — from the local government on down — eager to see you thrive.
And finally, rather than surrounding yourself with talented people who are too often only interested in padding their resumes before hopping to the next opportunity, you’ll find communities of equally talented men and women who aspire to have meaningful careers without leaving their hometowns.
Forget about going west to work for companies that build $700 juicers while turning a blind eye to people going hungry down the road. And don’t worry about missing out: Google and Apple and Facebook will always be hiring.
But if you want to change the world, do real work in real towns and you’ll soon know what real success feels like.
Austin McChord is the founder and CEO of Datto. He spoke about his experience outside of Silicon Valley at Techonomy NYC. See the video here.
-
Francisco Gimeno - BC Analyst Interesting point of view. SV has become a behemoth, and with the new 4th IR opportunities we can do things differently. We have to, as the world needs a deep change, and the 4th IR will give us the needed process of change, seizing the chances that blockchain is giving us.
-
By Mary Kan D'Andrea | November 8, 2018, 2:28 PM | Techonomy Exclusive
Andela co-founder Christina Sass will be speaking at Techonomy 2018 this Tuesday. Tune in to our homepage to see her live from our stage.
“Brilliance is evenly distributed,” said Andela co-founder Christina Sass onstage at Techonomy NYC in May 2018.
She was talking about people. Her company aims to be the answer to a software development and programming talent shortage, widening the search so employers can find them in new places. In the process, Andela is creating economic opportunity in developing countries.
Tolu Komolafe, one of Andela’s most senior developers, and co-founder of the Ladies in Tech organization, at work in EPIC Tower in Lagos, Nigeria. (Photo courtesy of Andela)
According to Code.org and statistics from The Conference Board, there are more than 544,000 open computing jobs in the United States, more positions than the nation’s universities and colleges can hope to fill with recent graduates.
Andela’s response is to identify talented young people in Africa, train them in software development, and place them in jobs at companies around the world without requiring them to move.
Andela offers a window into a promising possible future for work: A distributed workforce that is more diverse and creates economic opportunity where there was little before.
Founded in 2014 and venture funded, Andela serves as a recruiter, filling open developer roles at partner companies. But it does so by turning to the largely untapped talent pool of Africa, home to some of the world’s fastest-growing internet-savvy populations as well as sophisticated tech enclaves. Using tests and boot camps, the company selects coders and programmers and then trains them for six months.
These young coders often have educational backgrounds in computer science, though they generally lack the practical experience needed to turn their studies into a career. But with Andela, they don’t get your usual workplace training experience.
On top of receiving a computer, salary, and professional training, the package includes subsidized housing and regular meals. The company’s budding developers are then contracted out to companies across the globe, working remotely.
At times the developers head to lengthy, on-site visits at their contract companies in the United States, Europe, and elsewhere, building work relationships and solidifying ties. Andela serves as the employer of record but assigns each worker full-time to the client. Some now have already worked for their companies for more than two years.
Since July 2016, Andela has partnered with The Zebra, a car insurance comparison site, which has brought 13 engineers onto its team in Austin, Texas. “In addition to [their] technical contributions, they’ve also brought an energy that is infectious,” Meetesh Karia, CTO of The Zebra, says of the Andela engineers. “They’ve become a core part of our team.”
Andela has attracted $81 million in funding from investors including South Africa-based venture capital firm CRE Venture Capital, the Chan Zuckerberg Initiative, and Spark Capital, among others. And the company is swimming in qualified applicants, enabling it to hire only the most talented coders.
It now has more than 1,200 employees, many based in African urban hubs, including Lagos, Nigeria; Nairobi, Kenya; and Kampala, Uganda, with more to come.
Andela’s Kigali, Rwanda office is slated to open in January 2019. This company’s aspirations go way beyond its own profits. Andela hopes that the jump-start it gives trainees will not only give them work experience but inspire them to found local or global startups of their own.
-
Francisco Gimeno - BC Analyst New, interesting and exciting news comes from Africa every day, far from the usual negative stories. The world is starting to recognise the importance of a more skilled young African population, which can be key for the 4th IR's development on the continent. We applaud initiatives of this type.
-
-
Amazon recently abandoned a recruiting tool that used artificial intelligence to rate candidates. The tool studied the resumes of previously hired engineers to create algorithmic associations with certain schools, experiences, and key words that were supposedly indicators of work success.
But the company belatedly recognized that the tool was, in fact, teaching itself that men were preferable candidates to women. Chalk it up to the unintended consequences of technological advancement.
When Facebook’s ad targeting system was used to spread misinformation, the unrest it created around the world was another unintended consequence. When the company tried to fix the problem by clamping down on political advertising, it created even more unintended consequences by blocking dozens of non-political, LGBT-related ads.
And now, despite a rash of hate speech spread on Facebook that incited ethnic violence in Myanmar and Sri Lanka, and propaganda in the Philippines manipulated by its violent dictator, Facebook is accelerating its plans to expand Wi-Fi access in India, Indonesia, Kenya, Nigeria, and Tanzania.
Unintended consequences, anyone? As technology races ahead of society’s ability to absorb its impact at work, at home, and in world affairs, unintended consequences are becoming more ubiquitous than e-scooters. The so-called unintended consequences of social media have left consumers vulnerable to misuse of their data and resulted in just about every part of the world being plagued by misinformation and disinformation.
LinkedIn CEO Jeff Weiner put it in perspective during a recent talk: “It’s far less about the technology these days, and far more about the implications of technology on society,” he said. “We need to proactively ask ourselves about the potential unintended consequences of these technologies.”
Unintended consequences are one thing, predictable outcomes are another. Let’s not be glib about root causes.
Let’s not conflate errors, poor judgement, and shortsightedness with unintended consequences in order to shirk responsibility. And let’s not ignore the lessons of today’s unintended consequences. They are a warning about the potential impact of the next set of emerging technologies.
AI will soon be upon us; 5G promises to accelerate the speed of consequences, both good and bad. The combination is poised to change everything — our companies, our products, our transportation, and even our governments.
Kai-Fu Lee
Two of the world’s leading AI experts will be at Techonomy 2018 next month for a conversation about how AI will change innovation, policy, the nature of companies and work, and even national competition. Kai-Fu Lee, an alumnus of Apple, Silicon Graphics, Microsoft, and Google, is based in Beijing and is the author of a new book that explores the global fight for AI dominance.
Paul Daugherty is chief technology officer at Accenture and a longtime student of how AI is transforming business. He is bullish, writing here for Techonomy that “artificial intelligence could double annual economic growth rates of many developed countries by 2035, transforming work and fostering a new relationship between humans and machines.”
Given its transformational power and the global stakes, now is the time to assign responsibility for AI’s thoughtful development and deployment. A recent paper from EY contends that AI and machine learning are outpacing our ability to oversee their use, and points out that it’s risky to use AI without a well-thought-out governance and ethical framework.
The paper suggests four conditions that should be considered before starting an AI initiative:
- Ethics: An AI system must comply with ethical and social norms. Ethics must inform how people design, develop and operate the AI, as well as how the AI behaves. This approach, more than any other, introduces considerations that have historically not been mainstream for the development of technology, including moral behavior, respect, fairness, avoidance of bias, and transparency.
- Social Responsibility: The potential societal impact of the AI system should be carefully considered, including its impact on the financial, physical and mental well-being of humans and our natural environment. Potential impacts include workforce disruption, skills retraining, discrimination and environmental effects.
- Accountability and Explainability: The AI system should have a clear line of accountability to an individual. Also, the AI operator should be able to explain the AI system’s decision framework and how it works. This is more than simply being transparent; this is about demonstrating a clear grasp of how AI will use and interpret data, what decisions it will make, how it may evolve, and the consistency of its decisions across subgroups.
- Reliability: This involves testing the functionality and decision framework of the AI system to detect unintended outcomes, system degradation, or operational shifts — not just during the initial training or modelling but also throughout its ongoing “learning” and evolution. (A minimal sketch of two such checks follows this list.)
Terah Lyons
Paula Goldman, global lead of the Tech and Society Solutions Lab at Omidyar Network, is championing programs to bring this kind of thinking to the development process and to computer science education.
She and Terah Lyons, who leads the Partnership on AI, will lead a discussion about how to build an ethical operating system at next month’s Techonomy 2018.
The three-day retreat will be a very intentional conversation about consequences — how to anticipate them, how to guard against them, and how to react to them when they do inevitably come. It is a crucial step forward on the path toward a more trustworthy future.
-
Francisco Gimeno - BC Analyst All revolutions have consequences. The 4th IR will have them too. We need to prepare for them. The future can be very bleak or very bright depending on how we work on this. Who will control AI, and the algorithms which are more and more enmeshed in our lives? How can we control certain outcomes so they don't make things worse for humans? Tech and ethics must go hand in hand in this era.
-
UK government calls on businesses to embrace the Fourth Industrial Revolution | ... (channels.theinnovationenterprise.com)
UK MP Alan Mak has called on the country's private sector to play its part in the Fourth Industrial Revolution (4IR) as the country attempts to stay at the forefront of the global digital revolution post-Brexit.
Mak, who also works as parliamentary private secretary to the UK secretary of state for business Greg Clark MP, a noted "Remainer", said: "We're seeing an enormous shift in the amount of data available to businesses and their ability to analyze that data and use it to help them to push the boundaries of innovation and growth."
Speaking at the launch of The Fourth Industrial Revolution Report 2018, Mak said: "We are at the start of a new technological era that will transform the way we live and do business in a way not seen since the invention of the steam engine or the printing press.
"This new digital revolution will transform every sector, from financial services and manufacturing, to agri-food and energy. It will transform every sector within our economy."
Mak, who founded and chairs the 4IR All-Party Parliamentary Group at Westminster, said that the new report asked whether Britain could become a data leader in a post-Brexit world.
"My view, and the view of the British Government, is that it will and it must," Mak answered. "Embracing data is not just an option, it's absolutely vital."As every aspect of society moves toward digitalization, Mak argued that society as a whole needs to "move away from using the term AI" and adopt a "whole economy approach".
"In the future," he stressed, "every sector of the UK economy and within the economies of every country around the world will be a tech sector."For our country and other countries around the world, it will mean increasing productivity, growing wages, more jobs, stronger economic growth, but above all, rising living standards."
Citing The Fourth Industrial Revolution Report 2018, Mak said that businesses were starting to take data challenges seriously, noting that key data issues were now front of mind for CEOs and board members.
The 2017 version of the report noted that no CEOs were in charge of data issues; by the 2018 edition, that figure had grown to 15%.
Mak stated, however, that businesses approaching the data revolution would require a cultural shift in management and leadership, ensuring that the workforce is brought along on the digital transformation journey.
"We have to be very frank about which roles will be carried out by machines and which tasks will be carried out by humans," he noted.With demand for AI skills in the UK almost tripling over the last three years, according to job-searching website Indeed, Mak said that the UK government was aware that it needed to improve digital literacy across all strata of society.
"People will succeed in this revolution by being more human," Mak said, noting that the UK government was prioritizing STEM skills in schools, as well as introducing new technical-level qualification for the post-16 education sector, many courses of which have been scheduled to come online by 2020.
-
By Klaus Schwab
Founder and Executive Chairman, World Economic Forum
After World War II, the international community came together to build a shared future. Now, it must do so again. Owing to the slow and uneven recovery in the decade since the global financial crisis, a substantial part of society has become disaffected and embittered, not only with politics and politicians but also with globalization and the entire economic system it underpins.
In an era of widespread insecurity and frustration, populism has become increasingly attractive as an alternative to the status quo. But populist discourse elides – and often confounds – the substantive distinctions between two concepts: globalization and globalism.
Globalisation is a phenomenon driven by technology and the movement of ideas, people, and goods. Globalism is an ideology that prioritizes the neoliberal global order over national interests. Nobody can deny that we are living in a globalized world. But whether all of our policies should be “globalist” is highly debatable.
After all, this moment of crisis has raised important questions about our global-governance architecture. With more and more voters demanding to “take back control” from “global forces,” the challenge is to restore sovereignty in a world that requires cooperation.
Rather than closing off economies through protectionism and nationalist politics, we must forge a new social compact between citizens and their leaders, so that everyone feels secure enough at home to remain open to the world at large.
Failing that, the ongoing disintegration of our social fabric could ultimately lead to the collapse of democracy. Moreover, the challenges associated with the Fourth Industrial Revolution (4IR) are coinciding with the rapid emergence of ecological constraints, the advent of an increasingly multipolar international order, and rising inequality.
These integrated developments are ushering in a new era of globalization. Whether it will improve the human condition will depend on whether corporate, local, national, and international governance can adapt in time.
Meanwhile, a new framework for global public-private cooperation has been taking shape. Public-private cooperation is about harnessing the private sector and open markets to drive economic growth for the public good, with environmental sustainability and social inclusiveness always in mind. But to determine the public good, we first must identify the root causes of inequality.
For example, while open markets and increased competition certainly produce winners and losers in the international arena, they may be having an even more pronounced effect on inequality at the national level. Moreover, the growing divide between the precariat and the privileged is being reinforced by 4IR business models, which often derive rents from owning capital or intellectual property.
Closing that divide requires us to recognise that we are living in a new type of innovation-driven economy and that new global norms, standards, policies, and conventions are needed to safeguard the public trust. The new economy has already disrupted and recombined countless industries and dislocated millions of workers. It is dematerializing production, by increasing the knowledge intensity of value creation.
It is heightening competition within domestic product, capital, and labour markets, as well as among countries adopting different trade and investment strategies. And it is fueling distrust, particularly of technology companies and their stewardship of our data.
The unprecedented pace of technological change means that our systems of health, transportation, communication, production, distribution, and energy – just to name a few – will be completely transformed. Managing that change will require not just new frameworks for national and multinational cooperation, but also a new model of education, complete with targeted programs for teaching workers new skills.
With advances in robotics and artificial intelligence in the context of ageing societies, we will have to move from a narrative of production and consumption toward one of sharing and caring.
Globalisation 4.0 has only just begun, but we are already vastly underprepared for it. Clinging to an outdated mindset and tinkering with our existing processes and institutions will not do.
Rather, we need to redesign them from the ground up, so that we can capitalize on the new opportunities that await us while avoiding the kind of disruptions that we are witnessing today.
As we develop a new approach to the new economy, we must remember that we are not playing a zero-sum game. This is not a matter of free trade or protectionism, technology or jobs, immigration or protecting citizens, and growth or equality. Those are all false dichotomies, which we can avoid by developing policies that favour “and” over “or,” allowing all sets of interests to be pursued in parallel.
To be sure, pessimists will argue that political conditions are standing in the way of a productive global dialogue about Globalisation 4.0 and the new economy.
But realists will use the current moment to explore the gaps in the present system and to identify the requirements for a future approach. And optimists will hold out hope that future-oriented stakeholders will create a community of shared interest and, ultimately, shared purpose.
The changes that are underway today are not isolated to a particular country, industry, or issue. They are universal and thus require a global response. Failing to adopt a new cooperative approach would be a tragedy for humankind. To draft a blueprint for a shared global-governance architecture, we must avoid becoming mired in the current moment of crisis management.
Specifically, this task will require two things of the international community: wider engagement and heightened imagination. The engagement of all stakeholders in sustained dialogue will be crucial, as will the imagination to think systemically, and beyond one’s own short-term institutional and national considerations.
These will be the two organising principles of the World Economic Forum’s upcoming Annual Meeting in Davos-Klosters, which will convene under the theme of “Globalisation 4.0: Shaping a New Architecture in the Age of the Fourth Industrial Revolution”. Ready or not, a new world is upon us.
-
Francisco Gimeno - BC Analyst Hear, hear! This reading says in a few hundred words what the 4th IR and globalisation are all about. We all have to get prepared for the challenges which are already around us (not in the future anymore). Even more amazing is that in times of disruption like these, new tech is coming to make the disruption healthy and a reason to create something new, based on humanness and a new relationship with nature and tech.
-
-
OriginTrail (TRAC) Takes Part in EUR 20 Million Digital Transformation Project for the European Agri-Food Sector (investinblockchain.com)
By Editorial Staff
Blockchain technology is becoming part of a large-scale EU-funded project, intended to boost the digital transformation of the European countryside!
OriginTrail (TRAC), a blockchain company developing a data exchange protocol for interconnected supply chains, has become the go-to blockchain solution for 108 organizations, from 22 different European countries, involved in the SmartAgriHubs project, managed by Wageningen University & Research, the world’s leading provider of scientific education in the healthy food and living environment domain.
European Farming Sector Needs the Blockchain
By 2050, 70% of the world’s population will live in cities. As a result, cities will, to an increasing degree, face issues concerning sustainability and quality of life. This will have an impact on food security, mobility and logistics, the availability of water, dealing with raw materials and waste, health, and well-being.
The European Commission recognized the importance of facing this challenge by creating a competitive advantage for non-urban regions, including initiatives connecting the countryside with the Information & Communication Technology (ICT) sector.
George Beers, Project Manager at Wageningen University & Research and SmartAgriHubs Project Coordinator:
“SmartAgriHubs will not only increase the competitiveness and sustainability of Europe’s agri-food sector. It will become the 4th industrial revolution that will strategically re-orient the digital European agricultural innovation ecosystem towards excellence and success. Together with our partners we believe SmartAgriHubs will unlock the potential of digitization by creating a pan-European network of Digital Innovation Hubs, organizing an inclusive ecosystem around them and fostering them to achieve their full innovation acceleration capacity.”
EUR 20 Million from the European Union to Develop and Stimulate the Adoption of the Technology
The project aims to involve 2 million European farms and introduce 80 new digital solutions onto the market.
Digital Innovation Hubs are spread across the European Union with a regional approach, focused on 9 regional clusters, building a network covering all EU regions and connecting technology, business and industry-specific expertise with relevant players.
The consortium consists mostly of small and medium enterprises, but also of consultancies and private banks rooted in agriculture. It also plans to involve the broader community, with activities such as hackathons and datathons.
Other partners in the project include R&D companies as well as public entities, such as the UK’s Innovation for Agriculture, Schuttelaar & Partners, the Austrian Chamber for Agriculture, the French region of Loire, and FIWARE.
Blockchain technology, provided to partners via the OriginTrail protocol, is the underlying technology for building trust in the supply chains that SmartAgriHubs addresses.
The blockchain brings the decentralization of trust, through end-to-end visibility and better guarantees of integral quality and safety. It also addresses European consumers’ needs for traceability and transparency. With this undertaking, the results of OriginTrail’s pilot projects will be disseminated across the network of DIHs and beyond the agricultural sector.
In the second stage, the consortium will also publish open calls for both private and public entities to utilize the technology on further use cases.
In the graphic below, you can see the main technological and consumer trends that are vital to the project, including robotics, biotechnology, IoT, machine learning and the blockchain.
Source: https://ec.europa.eu/futurium/en/system/files/ged/wolfert_-_smartagrihubs_dih_wg_brussels_21feb2018.pdf
Žiga Drev, Co-Founder of OriginTrail, on the importance of this achievement and acknowledgment:
“OriginTrail’s team has been actively solving supply chain transparency and efficiency challenges since 2013. We have worked closely with farmers, producers, supply chain companies, and end consumers to fully understand what the challenges in food supply chains are.
This approach was welcomed by stakeholders. We are proud that the European Commission appreciates these efforts, too. OriginTrail solutions have been presented to European Commission officials on several occasions, including the anti-counterfeit IoT traceability proof-of-concept and the Smart Villages initiative.
The response was always encouraging. In association with the Wageningen University and the SmartAgriHubs project, we are making a significant step towards protocol adoption and will be working on tangible use cases.”
Key Facts at a Glance
- Instrument: Horizon 2020, DT-RUR-12-2-18: ICT Innovation for agriculture
- Contribution of the European Union: €20 million
- Duration: 4 years, 2018-2022
- Consortium: 108 initial partners, possibility to extend through open calls
- 140 digital innovation hubs, 9 regional clusters & 28 flagship innovation experiments
- Bridge public-private funding by mobilizing additional funding (30 M€)
- Strong focus on establishing a sustainable network of DIHs with viable business models and investment funds
-
Francisco Gimeno - BC Analyst A powerful idea backed by the EU. We need more of these projects to create strong and real use cases in this early stage of Blockchain adoption.
-
- Africa cannot afford, nor does it have to, miss the possibilities of the 4th Industrial Revolution.
- By 2030, Africa will have the world’s largest potential workforce. What if every one of them was connected, digitally skilled and an empowered digital consumer and/or producer?
In just 2 centuries, the industrial revolution globalized the economy with new forms of energy, organization, production and distribution capabilities, propelling industrialized countries into a golden age of prosperity.
But it also powered the slave trade, colonization and two World Wars.
Over the past few decades, the digital revolution made it possible for companies such as WhatsApp and Snapchat to reach billion-dollar market valuations within 2 years and with a few dozen staff, something that used to take the best companies of the industrial revolution 20 years and hundreds of thousands of staff to accomplish.
Today, the world stands at the cusp of the 4th industrial revolution, with the rapid convergence of technologies in the digital, biological and physical domains.
From the onset of the agrarian revolution in 10,000 BC, it took 6,000 years to double the world’s GDP. When the industrial revolution kicked in around 1760, it took less than 100 years. With the computing revolution around the 1960s, the time was reduced to less than 15 years.
The 4th industrial revolution – digitally smart factories, cities and entire economies connected to the Internet – has demonstrated that the rate of change will continue to accelerate.
Africa has essentially missed the opportunities of the second and third industrial revolutions. The continent is home to 16.3% of humanity but to less than 1% of the world’s billion-dollar companies and only about 4% of global GDP.
Africa cannot afford, nor does it have to, miss the possibilities of the 4th Industrial Revolution.
Africa’s best opportunity to bridge the gap with the rest of the world is through unity of purpose: if Africa unites, connects everyone, empowers the young generation, embraces change and thinks exponentially, it could bridge the development gap with the rest of the world in around a decade.
Unite
Here’s an example of what needs to be done. In 1994, MTN Group, a telecoms operator, was born in South Africa, while at the same time Amazon was born in the United States. 25 years later, MTN is among Africa’s top 5 companies, valued at $9B (as of Sept 2018), while Amazon became the world’s second trillion-dollar company this year, after Apple.
What made Amazon grow 100 times faster than MTN? MTN operates a product-focused, linear business model in 24 different, low-income markets, with different policy and regulatory constraints.
-
Francisco Gimeno - BC Analyst We strongly advocate for an Africa dedicated to the 4th IR. There are positive signs already. Urban youth debating crypto and how blockchain can change business and society in their own countries. African-based platforms for crypto and blockchain. Even governments (Uganda, Kenya, Nigeria, South Africa, Sierra Leone...) debating officially. The African revolution for the 21st century has arrived.
-
The fourth industrial revolution (4IR), which began as an initiative to combat challenges faced by the manufacturing sector, has grown to include almost every sector and is set to influence every conceivable aspect of business.
Technologies that are shaping the so-called Industry 4.0 (the industry emerging out of 4IR) include robotics, the Internet of Things (IoT), artificial intelligence (AI) and big data.
While in a manufacturing context these technologies are shaping the 'factories of the future' (a web of interconnected machines creating pre-programmed products while continuously uploading process data, entirely without human involvement), these same technologies are also reshaping most other industries.
The role of big data in the fourth industrial revolution is critical. In fact, some argue that big data is the fourth industrial revolution.
Understandably, there are concerns that autonomous machines will increasingly take over tasks that humans have always performed, leaving many jobless. However, it seems more likely that many new jobs will be created as the power of data is harnessed and used in a meaningful way.
The sheer pace of these emerging technologies, with associated demographic and socioeconomic impacts, is rapidly transforming industries and business models, completely redefining the skills that employers need.
Four ways in which work will change in 4IR:
Why we work
For generations, grown-ups have asked children: "What do you want to be when you grow up?" In the future, the concept of a 'job-for-life' will be met with blank stares. Rather than asking what you want to be, the question will be:
"What do you really enjoy doing?" This means learning will not only be lifelong in terms of skills, but also, at a deeper level, that learning will need to focus internally, to really understand ourselves.
This will be essential, as the 4IR is likely to impact our lives in how we communicate, how we produce and consume, and even our identities. For this reason, we must understand why the work we do matters and what value it adds, especially in a world where automation and artificial intelligence are woven into every part of our lives.
What we do
Employees will have to adopt a life-long learning approach to work, upgrading skills continuously, either to ensure they remain at the cutting edge of their field or to keep pace with the technological advances unfolding across industries. In fact, constant upskilling will be more important than work experience gained or tenure.
Occupations traditionally regarded as technical will also require creativity and interpersonal skills. As the ecosystems in which they operate evolve, even jobs that are generally expected to be less affected by technological changes, such as marketing, are likely to require very different skillsets just a few years from now.
How we work
Disruptions brought on by technology such as machine learning and robotics are likely to substitute for tasks previously performed as part of existing jobs rather than completely replace those occupations, freeing employees to focus on new tasks. A digital economy will drive new ideas, new information and new business models that are continuously expanding, combining and changing into new ventures and businesses.
Where we will work
The blending of physical and organisational boundaries will continue, requiring greater agility not just in innovation, but also operationally. Rather than being confined to a single space, work will be outcome-based, leveraging flexible arrangements as well as online talent platforms. Work will increasingly be understood as what people do, not where they do it.
This means businesses will collaborate with independent professionals and freelancers, often through digital talent platforms. It is likely that a new form of 'labour union' will emerge, which will require new policies, regulations and protections for newly emerging occupational categories and models of work.
Exactly what the world will look like in 20 or 50 years' time is still not clear, but the words of Klaus Schwab (founder of the World Economic Forum) serve as a signpost for all of us.
"The fourth industrial revolution can compromise humanity's traditional sources of meaning: work, community, family, and identity, or it can lift humanity into a new collective and moral consciousness based on a sense of shared destiny. The choice is ours."
-
Francisco Gimeno - BC Analyst Haven't you read Schwab's book on the 4th IR yet? Then read it. The 4th IR is coming like a high-speed train, and it is in our hands how to transform our lives and society, in this case our work, to create a better world using technology for our benefit. We can do this, or we can fail and be dominated by algorithmic technologies, becoming drones of those who control the same tech to get to the top.
-
-
As a Director at Verizon’s 5G Labs across the country, I help foster cutting-edge work across our ecosystem of co-working communities and Innovation Centers. I’m fortunate enough to witness how education and the evolving needs of the workforce are creating new dynamics to build the talent of today and tomorrow.
TALENT
Over the past few years I’ve spent a lot of time on university campuses working with (and learning from) the innovators of tomorrow before they enter the workforce. The idea behind it is two-fold:
Each new class is native to a newer and more inherently digital world. They are born with expectations and ideas that we could never truly understand, so we love to listen to them and hear their vision for how they want to change the world.
We believe that early investment in people and talent (time, mentorship, teaching, support) pays dividends in the long run, whether they work for Verizon as an employee someday or with Verizon as a partner.
Along the way, we have observed a few fundamental shifts in how educators and programs are approaching curriculum innovation to keep up with the rapid digitization of our economy and subsequently our workforce. Here are a few examples:
- Some university programs have looked externally to the market and observed what new digital tools are being used by both enterprises and the startups looking to disrupt them. They then integrate these tools into their class offerings. For example: Prototyping 101 with InDesign or Intro to VR/XR with Unity and Unreal Engine.
- Others have come to the realization that the best talent lies at the intersection of disciplines. Rei Inamoto (ex-chief creative officer at AKQA) coined the notion of needing the trifecta of a hacker, hipster and hustler to build an efficient and successful team -- technical meets design meets business.
- Some schools have evolved this notion to combine two of the three to create hybrids: for example, the creative technologist or the technical MBA. My most successful hires have been some of these!
- The best university programs combine the first two approaches and are thinking about a mixture of entrepreneurial and intrapreneurial concentrations. Empowering the “creative technologist” to build a product a customer will love is great, but enabling them to navigate a cross-functional team in a Fortune 100 is very different than navigating that same process at a company of 100. To prepare for this, some of my favorite faculty have created classes in “Corporate Entrepreneurship” and programs like “Startup Studio”.
Time will tell which approaches are best suited to adapt with the market, but I’m excited every time I come back to campus and see schools pivoting alongside the startups that spin out of them.
COMPANIES
On the hiring end of this talent is the startup, the company, the non-profit. More than ever, the large incumbents are trying to disrupt themselves before they are the recipients of startup disruption. They are creating new ventures and spin outs to meet new technological change head-on before it happens.
But interestingly enough, they are also investing in and partnering with those potentially disruptive startups in new ways. And an entire industry has been made from bridging those two seemingly unlikely partners through things like corporate accelerators or venture studios (I’ve run a few awesome ones with TechStars, R/GA, and The Hive).
WHAT’S NEEDED
These two trends, creative approaches to curriculum innovation and creative approaches to corporate/startup partnerships, are presenting a unique moment in both learning and work. Developing students who can operate across tech and business AND startup and enterprise is a goal schools and institutions should strive to achieve.
As we move through the Fourth Industrial Revolution, accelerated by 5G technology, our ecosystem and community has an unprecedented opportunity to drive better societal outcomes by encouraging these intersections through education.
My favorite book is Clayton Christensen’s “Disrupting Class”. It talks about how education and learning should be approached with the same frameworks regarding innovation as industry: applying learnings from the Innovator’s Dilemma to reinvent the classroom.
I believe that we are in a technological and societal moment where the world can change for the better if we embrace new approaches and enablers like 5G.
They can empower educators to innovate further and allow industries that will be impacted by the Fourth Industrial Revolution to accept a new wave of students armed with new approaches to developing products in a 5G world.
I’m excited to watch as this next class of students, with a fresh perspective unlike the class before them, drives a revolution that will create not only new economic growth but shared societal prosperity.
And it’s up to us to give students the tools, today, to make that change happen tomorrow.
For related media inquiries, please contact [email protected]
About the author(s):
Christian Guirnalda leads Verizon's 5G Labs across NY, LA, DC, Cambridge, and SF which focus on educating and engaging innovators to build next generation experiences through Verizon's 5G Incubator.
Prior to this role, he was a part of Verizon Ventures where he invested in early stage companies, funds, and accelerators.
Christian received a double major from Carnegie Mellon in Computing Technology and Manufacturing Management as well as an MBA from the University of Michigan.
-
Francisco Gimeno - BC Analyst The 4th IR is going to be as disruptive as the former industrial revolutions. But in this case, the new technologies have the power to fundamentally transform personal lives and society. Nothing and nobody will be unaffected. New generations, born in a digital era, are already preparing for it. Let's prepare ourselves to use the tools we have to join in.
-
KUALA LUMPUR (Oct 29): The third ASEAN Insurance Summit, which will be held on November 28 at Sasana Kijang, Bank Negara Malaysia, here, is expected to attract over 250 participants comprising industry leaders, regulators, stakeholders and industry players in the region.
Organised by the ASEAN Insurance Council (AIC), the summit is supported by the Life Insurance Association of Malaysia (LIAM) and the General Insurance Association of Malaysia (PIAM) under the guidance of the ASEAN Secretariat.
In a joint statement today, the parties said the summit, themed “The Fourth Industrial Revolution (4IR) and its Impact on the ASEAN Insurance Industry”, will explore how innovative approaches and technologies would transform the way insurance business is conducted.
“Given the changing financial landscape due to the revolution, we believe that the summit plays an important role for the ASEAN insurance authorities, regulators, insurers, practitioners and other stakeholders to exchange views on new developments, technologies and innovation as well as their impact on the regional cooperation and integration in the insurance and reinsurance sectors,” they said.
For more information on the ASEAN Insurance Summit, please visit: http://aseaninsurancesummit.com/.
-
Recommended: 4IR The key to our future on Earth, and beyond? - University Worl... (universityworldnews.com)
If universities exist in part to solve the most pressing problems of our time, they have their work cut out for them.
The reality is that if humans maintain their current rate of consumption, which already exceeds the capacity of Earth to renew itself, we will soon need two planets to live off, National University of Singapore mechanical engineering professor and circular economy guru Seeram Ramakrishna told the annual South African Technology Network (SATN) Conference held in Durban, South Africa, last month, which explored the role of universities in the fourth industrial revolution (4IR).
Against the backdrop of this shocking but real scenario, Dr Adriana Marais, a theoretical physicist and one of the 100 Mars One Project astronaut candidates, not only assumes the role of intrepid scientist, pushing the boundaries of knowledge, but also becomes a pioneer of a new frontier for human habitation.
Delivering the opening address of the conference, Marais outlined the ways in which the creation of a biosphere on Mars presented humankind with an opportunity to fundamentally re-examine the way it currently uses resources.
Referring to the possibilities of asteroid mining as an example, she said conventional mining beneath Earth’s surface was disrupting delicate ecological systems resulting in dangers to miners and at a huge cost to the environment. “We cannot be increasing population and urbanising without changing our systems,” she said. “We have to change the way we extract resources.”
Describing Marais as “our champion” during his keynote address at the SATN conference, Ramakrishna said humans were eyeing planets such as Mars partly because of unsustainable consumption driven in part by a linear economic model in which resources were mined, used and then discarded, with a strong link between consumption of natural resources and economic growth.
Soaring consumption
“Why do we need planets like Mars? … Per capita consumption is soaring. Food consumption is increasing as well as manufactured products … Humans have consumed more resources in the last 50 years than in the last 30,000 years,” he said.
A byproduct of a linear economy is, of course, waste. In the past 50 years, according to Ramakrishna, humans generated more waste than they have in all of previous history.
He said in many countries people were against the idea of repairing goods and sought rather to replace them with new items. The result is that in 2016, 44.7 million metric tons of e-waste was generated – equivalent in weight to 4,500 Eiffel Towers.
But all is not lost. Speaking on the topic of “The circular economy and the Fourth Industrial Revolution – Role of universities”, Ramakrishna’s key argument was that the 4IR and the new technologies it brings – robots, automation, the Internet of Things, big data analysis, machine learning, artificial intelligence, cloud computing, 3D printing and nanotechnology – hold the key to future sustainability and are in fact enablers for the circular economy – a model in which resources are kept in use for as long as possible, and then recovered and regenerated.
Ramakrishna, a leading academic in nanotechnology, also leads the National University of Singapore’s Circular Economy taskforce, with members drawn from across the university and from various national research institutes under Singapore’s Agency for Science, Technology and Research. He is also an advisor to the National Environment Agency of Singapore on Industry 4.0 and the Circular Economy.
He said innovation and technology adoption will decide which countries succeed economically in the longer term. “While 20% of GDP [gross domestic product] growth is driven by labour and capital, 80% of growth is shaped by how fast you adapt and market them,” he said.
Greater productivity
For Ramakrishna, the idea behind fourth industrial revolution technologies is to make us more productive and effective. “AI [artificial intelligence] will not only mean intelligent automation and machine vision but will influence cognitive systems and deep learning. Currently we learn and use technology; in future, technology will learn about us, and help us to be more productive and responsive,” he said.
Recent changes to the way in which waste was being managed globally – for instance China’s refusal to accept waste from the rest of the world; the European Union’s policies on single-use plastic; and the practice of generating metals from recycled electronic waste through ‘urban mining’ – were changing the dynamics of the global economy and presented major opportunities for emerging economies and their universities, he said.
According to Waste to Wealth, a 2015 book by the major management consulting firm Accenture Strategy, the circular economy could generate US$4.5 trillion of additional economic output by 2030. Ramakrishna said it could also create around 65 million new jobs around the world by 2030, all focused on sustainable industries.
These initiatives should be considered by South African higher education institutions, which should adopt principles from the fourth industrial revolution and the circular economy to respond to societal questions and benefit people on the ground, he said.
Cleaner future
“Together, the circular economy and the 4IR are changing the world. They present a vision of a cleaner future, with huge opportunities,” he said.
During the discussion that followed, Ramakrishna admitted that consumerism was “rampant” in parts of Asia – the product of a sense of entitlement that was unlikely to change.
“When you talk to people, they say they are entitled to a better lifestyle and they translate that into access to more material goods. That’s the general perception or mindset of East Asians where I come from.
If you tell the population who are the first generation to be born into the technological environment that they can’t have those goods, it’s not likely they will buy into the idea. The best way is to come up with a new way of designing products and services and those new ways are 4IR technologies which need to be embraced.”
Ramakrishna reiterated that South Africa – despite its problems of inequality – was not only included in this revolution but could take advantage of it with positive results. “4IR technology is slowly penetrating a number of domains of human living.
In a way, it’s a great opportunity, a great equaliser but we need to embrace it. If you embrace it, you can go the path of equalising. If not, then the gap is likely to be wider.”
For Ramakrishna, the “next way” is the circular economy tied to the United Nations Sustainable Development Goals (SDGs). “That will be the next way. That addresses issues of inequality, resource utilisation and climate change and changes in living conditions. … Embrace the UN SDGs and the circular economy and fourth industrial revolution will fit in with that.
We can only really see 10 to 15 years ahead. We need to contribute to the community and make a difference, get out of our loneliness and be happier.”
-
Cape Town - Minister of Higher Education and Training Naledi Pandor said she would soon ask the National Treasury for more funding - and set up a ministerial committee - to ensure South Africa does not lag behind in the fourth industrial revolution (4IR) race.
This is in addition to her announcement at a recent BRICS 2018 Future Skills Challenge in Midrand that technical and vocational education and training would get a R2.5 billion boost to equip it with 4IR skills.
At the closing ceremony, Pandor said to ensure 4IR success, co-operation among BRICS nations - Brazil, Russia, India, China and South Africa - was vital to improve skills, strengthen academic ties and enhance student mobility. Sharing knowledge, research and innovation between academics in BRICS countries could strengthen integration, she said.
“If universities in BRICS collaborate successfully on research and teaching in student and staff exchanges, we can make a significant contribution to global knowledge.”
She said the BRICS Network University was an education project underpinned by the 4IR, which had major implications for business and education.
BRICS Network University is a group of 60 higher education institutions from member countries - 12 from each of the five BRICS countries - established by BRICS education ministers to engage in educational and research initiatives across themes that include: university linkages and higher education mobility; technical and vocational education and training (TVET) exchanges; and sharing of education statistics and learning assessment experiences.
“We’re in the age of the pervasive influence of emerging technologies and artificial intelligence and need responsive skills and a development research focus and investment to benefit fully.
Through its research partnerships, the BRICS Network University can help reduce the poverty, unemployment and inequality that characterise many countries in the developing world,” Pandor said.
It is crucial that South Africa introduce these 4IR skills, as two-thirds of the children now at primary school are likely to end up working in jobs that do not exist today.
While she praised universities for developing 4IR skills, Pandor said much still had to be done to equip the country’s technical and vocational education and training colleges with related infrastructure. Ensuring that schools, colleges and universities prepared adequately for the 4IR was a critical requirement, she said.
Pandor said she would appoint a ministerial committee to address 4IR concerns. “Its remit will be to assess what is being done at different universities in the country and then to advise as to what my department should do to put us on a good edge in terms of participation in the digital revolution.”
She added it was high on her agenda to provide the infrastructure to bring colleges up to speed, so they could respond to the demands of new technology and contribute to employment creation and enterprise development in South Africa - but not all of her efforts would require funding, as she sought to draw on the existing experience of institutions in this area.
The minister added the challenges were not insurmountable and she was impressed at the steps being taken to ensure that South Africans were joining the digital innovation race.
“The Gauteng Department of Education’s introduction of technology to all schools has been a really bold step. We should encourage more provinces to do so.
An older initiative in the Western Cape has also had a positive impact. All our universities are doing more, boasting digital facilities in libraries, and wireless is being used widely; certainly, they’re ahead of colleges,” she said.
Commenting on the BRICS 2018 Future Skills Challenge, Pandor said it was a unique initiative, enabling co-operation among the youth, through BRICS, to find solutions to challenges:
“The focus on future skills differentiates this skills challenge from all other existing international skills challenges and competitions”.
This article first appeared on www.universityworldnews.com.
@WeekendArgus [email protected]