Blog

2016 | Feb | 10

The term “Internet of Things” (IoT) refers to the phenomenon that more and more electronic devices are connected to a local or global network and generate data. Such networks already exist and are generally called intranets or the internet, depending on their scale; traditionally, however, they exist to connect people with each other and with the world. Internet of Things devices also connect to such networks, but they are self-sufficient and usually not operated by people. One especially interesting application of this trend is factory automation, which uses networked computers to improve assembly and other parts of a company's operations. In Germany, the trend of integrating IoT devices into production sites is called the “fourth industrial revolution” (Industry 4.0). In a broader sense, software and many rather unintelligent small computers, embedded everywhere in the factory, combine to accelerate smart manufacturing.

Automation, “the process of converting the controlling of a machine to a more automatic system, such as a computer” (wikidiff.com), began in the middle of the twentieth century in several industries as engineers tried to improve the quality and uniformity of their products and to reduce costs. Over the years, however, products became much more complex, and production became more complex with them. Nowadays it is possible, for instance, to customize cars to a very high degree. This creates both intricacies and opportunities for entire industries, and especially for the automation industry. Just imagine hundreds or thousands of connected machines, sensors and autonomous vehicles inside a factory: how do they behave with each other and with the humans who interact with them? A new level of complexity will appear, and systems will have to interact with systems in order to abstract that complexity away and enable humans to cope with it. With regard to this technical challenge, three factors are essential: hardware, software and cloud computing.

Notably during the last ten years, an important shift has occurred in the IT landscape due to the rise of smartphones. Computer chips and other silicon-based components have shrunk, improved in speed and power consumption, and fallen rapidly in price. Gordon Moore, co-founder of Intel, described this correlation over fifty years ago in what is famously called “Moore's Law”. At the same time, the processor market became much more crowded: traditional “silicon companies” such as AMD or IBM spun their production off into contract manufacturers and now act as chip designers, many new chip designers such as MediaTek or Rockchip emerged, and established IT companies such as Samsung and Apple started designing their own chips as well. This new competition leads to better products and lower prices. As a result, one can now acquire a whole computer (processor, memory and printed circuit board) for as little as $5.
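
To get a feel for what this correlation implies, here is a minimal sketch in TypeScript of the doubling arithmetic behind Moore's Law; the starting transistor count and the time points are arbitrary illustrative values, not measured figures.

```typescript
// Doubling arithmetic behind Moore's Law: transistor counts roughly
// double every two years. The starting count of one million is an
// arbitrary, made-up figure chosen only to show the shape of the curve.
function transistors(initialCount: number, years: number, doublingPeriodYears = 2): number {
  return initialCount * 2 ** (years / doublingPeriodYears);
}

for (const year of [0, 10, 20, 30]) {
  console.log(`after ${year} years: ~${transistors(1_000_000, year).toExponential(1)} transistors`);
}
```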

In order to build a complex connected system, another component is essential: software. Enterprise software such as SAP, Microsoft's office suite or databases for big data analytics has come a long way and is now very common. Nevertheless, different technologies are also needed to power the Internet of Things. First of all, to get a low-power computer running, an operating system is usually required. Many organizations and entrepreneurs choose Linux because of its high portability to almost any computer platform and the big software ecosystem around it; depending on the use case, however, other options such as Windows or real-time operating systems (RTOS) are also suitable. To connect computers to a network, protocols and other technologies are needed. While new ones are being developed, many existing internet technologies are sufficient because, from a technology standpoint, there is little difference between connecting people to each other through computers and connecting computers to a network. One very widely used programming tool is node, which uses the JavaScript programming language, the quasi lingua franca of the web. JavaScript traditionally runs inside web browsers; node makes it possible to run it directly on computers. This is especially interesting because JavaScript is currently the most widely used programming language on GitHub, the leading open-source software repository. Node enables programmers to effortlessly integrate other web technologies, such as the HTTP protocol or the markup language HTML, into their projects and to deploy them easily across various devices. Broadly standardized programming languages and tools are important; however, for IoT devices to spread at a critical scale, it is essential to guarantee security. Classical anti-virus software consumes too many resources, so engineers and researchers have to pioneer entirely new security techniques in order to generate trust in the system.
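
As a minimal sketch of how little code such a web-technology stack demands, the following TypeScript snippet exposes a sensor reading over HTTP using node's built-in http module; the readTemperature() helper is a made-up placeholder for whatever sensor interface a real device would offer.

```typescript
// Minimal sketch: a device exposing a sensor reading as JSON over HTTP
// with node's built-in http module.
import * as http from "http";

function readTemperature(): number {
  return 21.5; // placeholder; a real device would query its sensor here
}

const server = http.createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ temperatureCelsius: readTemperature() }));
});

server.listen(8080, () => console.log("Sensor endpoint listening on port 8080"));
```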

Another important point to consider is how to store the data from all those computers in a way that makes the information usable. Many organizations use the “cloud” for this, which makes it possible to offload processing and storage from IoT devices to much more powerful computers over the internet. Cloud computing can be run inside the company, which may be the most secure way, or rented from companies such as Amazon, Google or Microsoft. This technology is interesting not only from an economic standpoint but from a technological one too: big cloud service providers offer not only very capable standard computers but also special ones equipped with graphics processors (GPUs) or programmable logic (e.g. FPGAs). These capabilities open up new possibilities such as deep learning, a machine-learning technique loosely inspired by the brain. One important application is image recognition, although the technology can be used to analyze all kinds of data.
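
Here is a hedged sketch of the device side of this pattern: an IoT node pushing a reading to a cloud endpoint. The URL, device identifier and payload shape are invented for illustration and do not correspond to any real provider's API.

```typescript
// Sketch of a device pushing a reading to a hypothetical cloud endpoint.
// Uses the global fetch available in recent node versions; everything
// about the endpoint itself is an assumption for illustration.
async function uploadReading(temperatureCelsius: number): Promise<void> {
  const response = await fetch("https://cloud.example.com/api/readings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      deviceId: "sensor-42", // made-up identifier
      temperatureCelsius,
      recordedAt: new Date().toISOString(),
    }),
  });
  if (!response.ok) {
    throw new Error(`upload failed with status ${response.status}`);
  }
}

uploadReading(21.5).catch(console.error);
```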

Thanks to the aforementioned technologies, a wide range of possibilities emerges. McKinsey expects that, while being only half as capex-intensive as prior industrial revolutions, Industry 4.0 will “yield a productivity improvement of as much as 26 percent” (McKinsey Industry 4.0 report, p. 7). By analyzing the information generated by the Internet of Things, management can make decisions on future investments much more knowledgeably. Engineers gain the option to integrate machines into manufacturing sites in a much more flexible manner and the possibility to improve the quality, yield and cost of products. Product engineers and designers can gain more insight and are consequently able to create products that suit customers' needs better. Beyond that, there is the opportunity for completely new products, or for existing ones with a new design. For instance, car manufacturers such as BMW are using Ethernet, “a family of computer networking technologies for local area networks (LANs)” (Wikipedia), to connect sensors and infotainment devices inside the car. Future cars will also be able to connect to each other and to the cloud to improve driving safety for passengers and pedestrians.

In conclusion, the spread of Internet of Things devices provides interesting opportunities for both consumers and producers. This shift may change the way production works, and it might be a disruptive change for many market participants. For now, companies are still exploring the options, and industry-wide standards have not yet been developed.

sources:

2014 | Dec | 06

There has been a lot going on with new technologies for personal computers. Whole industries have been shifting and changing. Some companies seem to have a master plan; others are losing ground or even starting to vanish.

In this stream of countless ideas around IoT, mobility and personal computing, only a few can be considered bold.

One particularly promising subject is the concept of modularity for mobile devices.

Modularity has always been a key approach for military aircraft, ships and everything else that is expensive to purchase and even more cost-intensive to maintain. Desktop PCs are user-upgradeable to a certain degree, but the high frequency of innovation and the lack of long-lasting standards pit consumers' desire to stay on the cutting edge of technology against the hardware they already own. It is therefore not easy for them to keep their personal computers upgraded in the long run.

As the majority of PCs have become more mobile over the years, finally ending up as smartphones, tablets or wearables, the size of these devices makes modularity even more difficult: there is rarely any space for interconnectors and other electronics.

In addition, the industry in general is not asking for such approaches, because the lifecycle of modular computers would obviously be longer than that of traditional ones, which would cut into producers' profits. One could argue that some manufacturers could shift to producing modules instead, but many of them only engineer the finished product (OEMs) and not the components (cameras, storage, SoCs, etc.).
However, manufacturers that do invest in components, as well as start-ups, could benefit from this technology trend.

For the field of mobile phones, the advantages of modularity are clear:

In comparison to traditional smartphones, which are often replaced after one to two years, modular phones would have a much longer lifecycle of about five years. This is possible because of the option to upgrade only those parts of the phone that lack performance and therefore limit the user's satisfaction. Consumers would consequently save a lot of money over that period, even if the initial cost of the device were higher than that of an off-the-shelf phone. Moreover, this concept would lead to a massive reduction of waste, because fewer electronic components would have to be produced. Some modules could even be shared: imagine that each of your family members and friends owned a modular phone and you bought a really good camera module; it could be shared among all of them. This technology thus opens up completely new markets and aftermarkets.
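
To make the savings argument concrete, here is a back-of-the-envelope comparison in TypeScript; every price in it is an invented, purely illustrative assumption, not market data.

```typescript
// Back-of-the-envelope cost comparison over five years. All figures
// are made-up illustrative assumptions, not real market prices.
const conventionalPrice = 500; // hypothetical phone, replaced every 2 years
const modularPrice = 650;      // hypothetical higher initial price
const upgradePerYear = 60;     // hypothetical cost of one module swap per year
const years = 5;

const conventionalTotal = conventionalPrice * Math.ceil(years / 2); // 3 phones
const modularTotal = modularPrice + upgradePerYear * years;         // 1 phone + upgrades

console.log(`conventional over ${years} years: $${conventionalTotal}`); // $1500
console.log(`modular over ${years} years:      $${modularTotal}`);      // $950
```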

Another major advantage for many consumers is the possibility of customization. People like to make their most personal computers truly personal. The market, however, cannot yet answer this demand for unique smartphones, simply because not enough companies are engineering them.

And that is where modularity really shows its benefits:

First and foremost, it lowers the barrier to founding a company that designs hardware for mobile computers. In a sense, starting a module start-up is more like starting a software start-up than a traditional hardware business, because a module is far less complex than a whole smartphone or tablet.

Plus, many elements of the shell can be produced with low-cost 3D printers, so some parts of production could shift to small workshops. Because many more people would take part in the creative process of designing a smartphone, one can expect quite a lot more innovation to happen.

Unfortunately, one major obstacle remains: someone has to develop the software and hardware so that the modules work together in a useful way.

One company that is trying to do this is Google:

Its Advanced Technology and Projects group has introduced "Project Ara", a fully modular phone concept in which not just some components are swappable or upgradeable, but all of them. It is even possible to swap non-vital modules (everything except the CPU and RAM) while the device is running (hot swapping).
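
From the software's point of view, hot swapping might look roughly like the following TypeScript sketch, in which the operating system announces modules as they come and go; the event names and the ModuleInfo shape are invented for illustration and are not Project Ara's actual API.

```typescript
// Sketch of hot swapping as software might see it: the OS emits
// attach/detach events as modules come and go at runtime.
import { EventEmitter } from "events";

interface ModuleInfo {
  slot: number;
  kind: "camera" | "speaker" | "battery";
}

const moduleBus = new EventEmitter();

moduleBus.on("attach", (m: ModuleInfo) =>
  console.log(`module attached in slot ${m.slot}: ${m.kind}`)
);
moduleBus.on("detach", (m: ModuleInfo) =>
  console.log(`module removed from slot ${m.slot}: ${m.kind}`)
);

// Simulate a user swapping a camera module while the device runs:
moduleBus.emit("attach", { slot: 3, kind: "camera" });
moduleBus.emit("detach", { slot: 3, kind: "camera" });
```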

Google's major goal here was to introduce standards: for instance, there has to be a standard for how modules connect to the frame and for the shape and size of those modules. It has done so by introducing a Module Developer Kit (MDK), with which developers can check the compatibility of their module with the frame (the "endoskeleton") and the software.

A big obstacle was reducing the overhead (additional weight and thickness) compared to traditional mobile phones. The team managed to get it "down to around 25% across the board; PCB area, device weight, and overall power consumption" (Paul Eremenko, director of Project Ara). They are of the opinion that consumers would accept the additional weight and size in return for this new technology.

In addition, no mobile operating system supported modularity at this point in time. Luckily, the Linux kernel, which is the core of many modern operating systems, does. So, similar to drivers for the USB standard, the team behind Project Ara developed standard drivers for whole groups of modules, e.g. loudspeaker modules. Every group of modules shares some similarities, so one driver for all loudspeaker modules is a smart idea. Of course, modules within one group can differ too, so the developers of those modules are free to program special drivers for their unique products.
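
The idea of one standard driver per module group, with room for product-specific extensions, can be sketched roughly like this in TypeScript; all class and method names are invented for illustration and do not mirror Project Ara's real driver model.

```typescript
// One generic driver per module group, with product-specific
// specializations layered on top.
class LoudspeakerDriver {
  constructor(protected readonly slot: number) {}

  // Behavior shared by every loudspeaker module.
  play(samples: Float32Array): void {
    console.log(`slot ${this.slot}: playing ${samples.length} samples`);
  }
}

// A vendor overrides only what differs from the standard driver.
class BassBoostSpeakerDriver extends LoudspeakerDriver {
  play(samples: Float32Array): void {
    console.log(`slot ${this.slot}: applying bass boost`);
    super.play(samples);
  }
}

const driver: LoudspeakerDriver = new BassBoostSpeakerDriver(2);
driver.play(new Float32Array(1024));
```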

All in all, this concept has the potential to change the market once again, as the iPhone did.


By the time this article was written, the Project Ara team had already revealed a working prototype running the Android OS.

sources:

2014 | Apr | 17

Recently I came across something very interesting.

But let me start from the beginning: Wolfram Research, an English-American private company specializing primarily in advanced computation software, is well known for its program Mathematica and its search engine Wolfram Alpha. Its founder and CEO Stephen Wolfram and his crew of talented programmers and mathematicians know how to wow people.

When they launched Wolfram Alpha, everyone knew that it had a lot of potential, because this search engine, unlike others, displays the actual answer to your question instead of listing the most relevant websites. Being that forward-looking, it is very useful and productive for research.

Now something different is coming:
Several months ago, Stephen Wolfram first showed the world his new project:

The Wolfram Language.

It is a knowledge-based, symbolic programming language that is essentially built on mathematical formulas. To reach a large number of people, it is possible to use natural language in addition to symbolic code.

Earlier projects from Wolfram Research are built upon this programming language, and since Mathematica has existed for 26 years, it is fair to say that the Wolfram Language has been in the works for even longer.

The real benefit of this new programming language is the fact that it is knowledge-based: a huge amount of knowledge is built right into the language itself.
This means that the programmer can draw on that knowledge and create very complex software with only a few lines of code.
