36th Monthly Technical Session (MTS) Report

The 36th Monthly Technical Session (MTS) was held on July 21st, 2017. MTS is a knowledge-sharing event in which HDE members present topics and hold Q&A sessions, both in English.


The moderator of the 36th MTS was Iskandar-san.


The first topic was “Machine Learning: Intuition” by Nutt-san. He focused mainly on supervised learning, which has two phases: training and testing. Given input-output pairs, a good mapping from input to output is identified in the training phase. This mapping is then used to predict outputs for new inputs in the testing phase. A good predictor should have the smallest possible error on test data (not training data).
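The two phases can be made concrete with a toy example. The data and the 1-nearest-neighbour rule below are illustrative assumptions, not from the talk; the point is only that error is measured on held-out test data, not on the training pairs the predictor has already seen.

```python
# Toy illustration of supervised learning's two phases:
# train on input-output pairs, then judge the predictor on unseen data.

def predict_1nn(train_pairs, x):
    """Predict the label of x from the nearest training input."""
    nearest_x, nearest_y = min(train_pairs, key=lambda p: abs(p[0] - x))
    return nearest_y

def error_rate(pairs, train_pairs):
    """Fraction of pairs the predictor gets wrong."""
    wrong = sum(1 for x, y in pairs if predict_1nn(train_pairs, x) != y)
    return wrong / len(pairs)

# Training phase: identify a mapping from the given input-output pairs.
train = [(1, "a"), (2, "a"), (8, "b"), (9, "b")]
# Testing phase: measure error on inputs the predictor has never seen.
test = [(3, "a"), (7, "b"), (6, "a")]

print("training error:", error_rate(train, train))  # 0.0 - misleadingly optimistic
print("test error:", error_rate(test, train))       # nonzero - what actually matters
```

Here the training error is zero (every point is its own nearest neighbour), while the test error is not, which is why a predictor must be judged on test data.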

Nutt-san also emphasized that supervised learning works on the basis of correlation, not causation. A predictor correlates input with output without knowing anything about causation, so input features have to be selected carefully.

Nutt-san also explained the difference between deductive reasoning and inductive reasoning. To put it simply, in deductive reasoning a conclusion is reached by applying general rules, while in inductive reasoning a conclusion is reached by extrapolating from specific cases. Deductive reasoning from true premises is always correct, while inductive reasoning is not. Machine learning is a kind of inductive reasoning. In relation to this, he reminded us that no single algorithm works best for all supervised learning problems.


The second topic was “Spurious” by Fukutomi-san. Threads are used quite extensively in a project he was working on. Threads usually execute concurrently and share resources, and sometimes multiple threads accessing the same resource is undesirable because of concurrency issues such as race conditions.

In Java, one way to solve this is to synchronize threads. Another way is to utilize guarded blocks, which involves methods such as wait() and notify(). Unfortunately, there was a problem when Fukutomi-san was working with guarded blocks. It turned out that a thread can also wake up without being notified, interrupted, or timing out. This is called a spurious wakeup. He worked around this limitation by utilizing a true_wakeup flag.
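The standard defense against spurious wakeups is exactly the flag Fukutomi-san used: set a boolean only when a genuine notification is sent, and re-check it in a loop around the wait. Python's `threading.Condition` carries the same caveat as Java's `wait()`/`notify()`, so the pattern can be sketched here in Python (the names `true_wakeup` and the toy threads are illustrative):

```python
# The flag-in-a-loop pattern that guards against spurious wakeups.
import threading

condition = threading.Condition()
true_wakeup = False  # set only by a genuine notifier
result = []

def waiter():
    with condition:
        # Loop instead of a bare wait(): if the thread wakes up
        # spuriously, the flag is still False and it waits again.
        while not true_wakeup:
            condition.wait()
        result.append("woken for real")

def notifier():
    global true_wakeup
    with condition:
        true_wakeup = True   # mark the wakeup as genuine...
        condition.notify()   # ...then wake the waiting thread

t = threading.Thread(target=waiter)
t.start()
notifier()
t.join()
print(result[0])
```

A bare `if not true_wakeup: condition.wait()` would be wrong for exactly the reason the talk described: the thread could wake without being notified and proceed as if the condition held.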


The third topic was an explanation of a new component of an HDE service by Ogawa-san. He began by explaining the role of the new component in the HDE service. Then, he explained the technologies involved in the development of the new component. He developed the component using C++14 and Windows API, and he developed the installer program using C# 7 and .NET Framework 4.6.

Ogawa-san had to use C++ due to the component’s relationship with Windows' Local Security Authority Subsystem Service (LSASS). High-level features cannot be used in core operating system processes such as LSASS. In his opinion, reporting events to the Event Viewer from C++ code is not ideal. He also explained his approaches to unit testing and continuous integration.


The fourth topic was “Security Assessment with Amazon Inspector” by Jeffrey-san. Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.

Jeffrey-san explained how to use the service. First, Amazon Inspector AWS Agents are installed in the target Amazon EC2 instances. Second, Amazon Inspector Assessment Targets, which are collections of EC2 instances to be scanned, are defined. Third, Amazon Inspector Assessment Templates, which define the standardized tests to be applied to the assessment targets, are created. Finally, the assessment is run.
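Steps two through four can be sketched with boto3's Inspector (Classic) client. The names and ARNs below are placeholders, and the rules-package ARNs are region-specific, so treat this as an assumed outline rather than a working deployment; step one (installing the agents) happens on the instances themselves and is not shown.

```python
def build_template_params(target_arn, rules_package_arns,
                          name="mts-demo-template", duration=3600):
    """Assemble the parameters for create_assessment_template."""
    return {
        "assessmentTargetArn": target_arn,
        "assessmentTemplateName": name,
        "durationInSeconds": duration,
        "rulesPackageArns": rules_package_arns,
    }

def run_assessment(rules_package_arns):
    # Imported here so the parameter helper above can be used
    # without boto3 installed.
    import boto3
    inspector = boto3.client("inspector")
    # Step 2: define the assessment target (the EC2 instances to scan;
    # omitting resourceGroupArn targets all instances in the account).
    target = inspector.create_assessment_target(
        assessmentTargetName="mts-demo-target")
    # Step 3: create the assessment template (the standardized tests).
    params = build_template_params(target["assessmentTargetArn"],
                                   rules_package_arns)
    template = inspector.create_assessment_template(**params)
    # Step 4: run the assessment.
    return inspector.start_assessment_run(
        assessmentTemplateArn=template["assessmentTemplateArn"])
```

Once the run finishes, findings can be pulled back with the same client's `list_findings` and `describe_findings` calls.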

Amazon Inspector handles the analysis and even generates reports. Alternatively, findings from Amazon Inspector can be retrieved via its APIs, which allows users to generate and format their own summarized or detailed reports.

Some pros of Amazon Inspector are that it is AWS-native, low-cost (30 cents per AWS Agent per assessment), and a good option for analyzing infrastructure vulnerabilities. Some cons are that it is limited to EC2 instances and that some benchmarks only work on certain operating systems.


The fifth topic was “But Will It Compile in Space!?” by Ignaty-san. He was one of our Global Internship Program (GIP) participants. This topic was a look at the effects of space radiation on electronics. There are several major sources of radiation in space, such as the solar wind, the Van Allen belts, solar weather events, and cosmic rays.

Radiation in space is much harsher than radiation on Earth. At such levels, radiation can cause several kinds of damage to electronics. It can induce single-event effects, which result in data degradation, calculation or logic errors, and any number of other malfunctions. It can also cause gradual component degradation, which eventually results in components failing entirely.

There are some ways to mitigate the effects of space radiation on electronics. The classic solution is radiation hardening, which essentially means making components from more durable materials and is expensive. Other solutions consist of avoiding radiation belts, shielding electronic components, designing fault-tolerant software, and utilizing redundant components.
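Of the mitigations above, redundancy is the easiest to sketch in software: triple modular redundancy (TMR) runs a computation on three replicas and takes a majority vote, so a single radiation-induced fault is outvoted. The simulated bit flip below is an illustrative assumption, not an example from the talk.

```python
# A minimal triple-modular-redundancy (TMR) voter.
from collections import Counter

def tmr(replicas, *args):
    """Run three redundant replicas and return the majority result."""
    results = [replica(*args) for replica in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority - all replicas disagree")
    return value

def healthy(x):
    return x + 1

def flipped(x):
    return (x + 1) ^ 0b100  # simulate a radiation-induced bit flip

print(tmr([healthy, healthy, flipped], 41))  # 42: the faulty replica is outvoted
```

Real fault-tolerant systems combine this with the other mitigations, since a vote only helps while at most one replica is wrong at a time.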


The sixth topic was “The Sweets and Bitters of React Native” by Rachel-san. She was also one of our GIP participants. React Native is a framework for building native apps using React. The motivation behind it is the desire to write mobile apps with the same logic as web apps, while achieving native behavior without sacrificing performance. It reuses React logic in app development, acts as a bridge to native APIs, and executes JavaScript on a background thread.

Some pros of React Native: it is easy for web developers to pick up, it provides a shared logic and code base for iOS and Android, it gets rid of heavy IDEs, it supports hot reloading, and it is easy to combine with native code. Some cons: knowledge of the native mobile platforms is still required, it relies on third-party libraries and documentation, it has frequent release cycles, and it has many ongoing problems due to its relative immaturity.


The seventh topic was “TensorFlow - Machine Learning without PhD” by Dovile-san. She was also one of our GIP participants. TensorFlow is an open-source software library for machine intelligence. It offers good performance with modest computing power, uses data flow graphs for numerical computations, and provides APIs for Java, C++, Python, and Go. Other TensorFlow-related offerings include TensorBoard for visualization and the TensorFlow Research Cloud for computational resources.

Dovile-san demonstrated the use of TensorFlow to build artificial neural networks. Given the MNIST database of handwritten digits, the task is to train a model that looks at images and predicts which digits they show. Using TensorFlow, she defined the number and shape of the layers of the neural network. She also specified the learning rule and the error measure calculation.
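The ingredients she specified in TensorFlow (a layer of weights, a learning rule, and an error measure) can be illustrated without the library. This sketch trains a single logistic "neuron" (a one-unit layer) by gradient descent on toy data; the data, learning rate, and squared-error measure are illustrative assumptions, not her MNIST demo.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mean_error(w, b, data):
    """Error measure: mean squared error of the predictions."""
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in data) / len(data)

# Toy task: inputs below 0 are class 0, inputs above 0 are class 1.
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b = 0.0, 0.0  # the "layer": one weight and one bias
lr = 0.5         # learning rate

before = mean_error(w, b, data)
for _ in range(200):
    # Learning rule: gradient descent on the error measure.
    for x, y in data:
        p = sigmoid(w * x + b)
        grad = 2 * (p - y) * p * (1 - p)  # d(error)/d(w*x + b)
        w -= lr * grad * x
        b -= lr * grad
after = mean_error(w, b, data)

print(f"error before: {before:.3f}, after: {after:.3f}")
```

Her MNIST model follows the same recipe at a larger scale: layers of many such units, a learning rule that adjusts all their weights, and an error measure computed over the predicted digits.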


As usual, we had a party afterwards :)