


Designing APIs with Goa

Like most web services built under current software development trends, the service our team is currently building is based on the idea of microservice architecture, using the Go programming language. Right now we have around 10 theoretically independently deployable services, and these services often communicate with each other through RESTful APIs with over 100 HTTP endpoints. We also have a feeling that these numbers will keep increasing over time.


32nd Monthly Technical Session (MTS) Report

The 32nd Monthly Technical Session (MTS) was held on March 17th. MTS is a knowledge-sharing event in which HDE members present topics and have Q&A sessions, both in English.

The moderator of the 32nd MTS was Kevin-san.


The first topic was ‘Microsoft Azure Overview’ by Mami Konishi-san and Drew Robinsons-san from Microsoft.

Konishi-san explained the current momentum of Microsoft Azure. These days, 90% of Fortune 500 companies use Microsoft Cloud. Each month, there are 120,000 new Azure customer subscriptions. Furthermore, there are currently 34 available Azure regions, with 4 others coming soon.


Robinsons-san demonstrated how to utilize Microsoft Azure to create an Ubuntu server. This task can be accomplished in several ways, the first of which is to utilize the Azure portal. For people who prefer to work with terminals, Microsoft offers Bash on Windows and Azure CLI 2.0. Using these tools in combination with Azure Resource Manager gets the job done, but Robinsons-san introduced yet another approach. Besides running commands in the CLI, Azure Resource Manager also supports Azure Quickstart Templates, with which deploying Azure resources becomes simpler. There are already quite a lot of templates available in the GitHub repository.

He also explained other Azure services, such as DocumentDB, Azure Container Service, and Service Fabric. DocumentDB is Azure’s NoSQL service. By turning on protocol support for MongoDB, DocumentDB databases can even be used as the data store for apps written for MongoDB. Azure Container Service enables deployment and management of container-based applications on Microsoft Azure. Service Fabric is Azure’s microservices platform. Robinsons-san mentioned some of its features, namely its support of both stateless and stateful microservices, its tools for Visual Studio, and local cluster support.


The second topic was ‘Introduction to Swagger for Amazon API Gateway’ by Furukawa-san. Swagger is a framework for designing, building, and documenting RESTful APIs. Swagger consists of the specification (currently known as OpenAPI Specification) and the tools that support it.

Amazon API Gateway allows us to export APIs as Swagger. We can then update the exported API definitions using tools such as Swagger Editor. Finally, we can import the updated API definitions back to API Gateway. By default, imports merge updates to the existing API definitions. However, imports can also be configured to overwrite existing API definitions. In utilizing Swagger for API Gateway, Furukawa-san was concerned about how Swagger supports environment-specific configurations (e.g. AWS account IDs and Lambda function ARNs) and certain aspects of API Gateway (e.g. API keys, usage plans, and custom domains).
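As a rough illustration (this fragment is hypothetical, not from Furukawa-san's slides), the kind of Swagger 2.0 definition that can round-trip through API Gateway looks like this:

```yaml
swagger: "2.0"
info:
  title: example-api        # hypothetical API name
  version: "1.0"
paths:
  /items:
    get:
      responses:
        "200":
          description: A list of items
```

An export from API Gateway produces a document of this shape (typically with x-amazon-apigateway-* extensions), which can be edited in Swagger Editor and then imported back in merge or overwrite mode.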


The third topic was ‘How to Make a Secure Web Application’ by Okubo-san. He had wanted to talk about this because of changes in HDE’s business environment and its ISMS utilisation. He also mentioned past incidents that happened to other companies, such as YouTube (2010), Sony (2011), and Twitter (2014).

Okubo-san was particularly concerned about several aspects that affect security: programming rules, frameworks, vulnerability-checking tools, and third-party verification. He recommended making programming rules based on an established standard, such as this document by the Information Technology Promotion Agency. Because human error happens often, he suggested that we not implement validators individually. Instead, those validators should be included in a framework that everybody uses. Identification of some vulnerabilities can be automated by using tools such as the OWASP Zed Attack Proxy Project. Finally, we should also consider third-party verification, because it may help us discover vulnerabilities that we hadn’t noticed before.


The fourth topic was ‘Designing APIs in Go’ by Shihan-san. The HDE service he’s working on consists of 10 microservices with no less than 130 HTTP endpoints. Furthermore, in maintaining the APIs, the team needs to modify handler functions, update the client code, update the API docs, and so on. This led him to Goa, a framework for building microservices and REST APIs in Go. Goa generates a lot of things automatically, such as boilerplate, glue code, documentation (JSON schema / Swagger), a CLI, a JavaScript library, a Go client, and others.

To him, the benefits of utilizing Goa are that we can focus on writing the code that matters, Goa generates the boring parts as idiomatic Go code, it helps us complete our documentation, and it has a friendly community around it. On the other hand, he was still not sure how to utilize Goa in several aspects of his work, namely things related to JSON Web Tokens (JWT) and role-based access control (RBAC).


The fifth topic was ‘Data Science & Python’ by Aji-san. He is one of our Global Internship Program (GIP) participants. In very simple terms, data science is about extracting knowledge from data. This extraction process usually involves data collection, preparation (e.g. cleaning and transformation), and manipulation. Afterwards, we can obtain hypotheses from the data, namely by examining what affects the attributes and the relationship between attributes.

In regards to Python, there are quite a lot of data science libraries available. Aji-san explained some of them, such as NumPy, pandas, and Seaborn. With NumPy, we can utilize N-dimensional arrays, among other things. pandas is a data structure and analysis tool. Seaborn allows us to create better-looking plots. He felt that, compared to R, Python is easier to read, can be combined with other platforms/domains, and has more and better visualization choices. On the other hand, regarding data science, it is easier to find answers about R, since it’s more mature than Python in that aspect.
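A tiny sketch of the NumPy side of this (the data here is made up; pandas and Seaborn are omitted to keep the example self-contained):

```python
import numpy as np

# Build a 2-dimensional array: rows are samples, columns are attributes.
data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

print(data.shape)         # (2, 3): 2 samples, 3 attributes
print(data.mean(axis=0))  # per-attribute means: [2.5 3.5 4.5]
print(data.T.shape)       # transpose: (3, 2)
```

Operations like these (reshaping, aggregating along an axis) are the building blocks that pandas and Seaborn layer their data structures and plots on top of.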


The sixth topic was ‘Introduction to systemd’ by Chiachun-san. He is the other one of our GIP participants. init is the first process started during booting of a computer system (PID = 1). Most Linux distributions used to use SysV-style init (SysVinit). However, it is old and has many drawbacks, one of which is that it starts tasks serially.

This is where systemd comes in. It is a system and service manager for Linux operating systems, and a full replacement of SysVinit. Chiachun-san highlighted some of its features, such as service dependency definitions, its own logging system, and service identification by utilizing cgroups. It was released in 2010, and today most Linux distributions have switched to systemd as the default init system.

He also explained two systemd commands, systemctl and journalctl. systemctl is used to control the systemd system and service manager, while journalctl is used to query the systemd journal (systemd’s own logging system). He also showed some examples of using systemctl and journalctl to accomplish certain tasks.
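For illustration (this hypothetical unit file is not from the talk), a minimal service definition showing the dependency declarations mentioned above might look like:

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=My example application
After=network.target          ; start only after the network is up

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Such a service could then be managed with commands like `systemctl start myapp` and its logs inspected with `journalctl -u myapp`.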


The seventh topic was ‘PHP 7.1 is Fast (?)’ by Uzulla-san from builderscon. He specializes in PHP, has won best speaker awards at many tech conferences, and is an author as well. He is also an organizer of tech conferences, one of which is builderscon.

PHP 7.1 was released last December, and Uzulla-san was intrigued by the impression that it is fast. He then took it upon himself to benchmark the new version of PHP, utilizing tools such as ab, httperf, and wrk. He said he wasn’t successful in achieving higher performance than existing benchmarks, but he was pretty satisfied with his results.


As usual, we had a party afterwards :)

31st Monthly Technical Session (MTS) Report

The 31st Monthly Technical Session (MTS) was held on February 24th. MTS is a knowledge-sharing event in which HDE members present topics and have Q&A sessions, both in English.


The moderator of the 31st MTS was Jeffrey-san.


The first topic was ‘Introduction of EBS’s New Feature’ by Nagira-san. Attaching additional EBS volumes had always been hard for him, because he had had to stop the EC2 instance that the EBS volume was attached to. Fortunately, AWS provided an update which allows him to increase volume size, adjust performance, or change the volume type while the volume is in use. In other words, he no longer has to stop any EC2 instance when attaching additional EBS volumes, which makes his work easier and reduces downtime. Nagira-san also explained in detail the new way to attach additional EBS volumes.


The second topic was an explanation of an HDE service’s mobile UI, by Kevin-san. He began by reintroducing the service itself. Then, he talked about the technology stack, particularly about why he had chosen Riot as the JavaScript framework: it’s very small, component-based, has simple APIs, and its community is small but friendly. He proceeded by teaching us about the components of the mobile UI and what’s actually happening under the hood. He wrapped the session up by explaining his work’s performance, past challenges, and future work.


The third topic was ‘Backing Up DynamoDB Tables’ by Bagus. He began by talking about the recent events that made him look into the issue further. One strategy we can use to back up DynamoDB tables is to utilise AWS Data Pipeline. Data Pipeline is a service which allows us to automate the movement and transformation of data. Creating pipelines to back up DynamoDB tables is simple, because we can use templates. Via the AWS console, we can easily create a pipeline that exports a DynamoDB table to S3 and another pipeline that imports DynamoDB backup data from S3.


The fourth topic was ‘Things I Learned from IT Admins in Taiwan’ by Nakakomi-san. For almost half of last year, he was working in Taiwan. During that time, he noticed some differences between Japan and Taiwan in several aspects, such as employment culture, decision making, and the concerns of IT admins. He related these differences to HDE’s goal of becoming a world-class IT company. To achieve that goal, we need to think globally. Each country has its own culture, which affects the way its people think, which in turn affects the solutions they need. By knowing more about other countries, we can adjust our solutions to suit their needs.


The fifth topic was ‘Introduction to Video Encoding’ by Michael-san. He began by explaining the terminology related to video encoding, such as containers, codecs, compression, and encoders. He continued by teaching us about compression and the effects of re-encoding. There’s always a tradeoff between file size and quality. According to Michael-san, the codec used and the encoding parameters affect the result a lot. He also answered some frequently asked questions regarding video encoding. He wrapped the session up by introducing the software he prefers for encoding videos, FFmpeg.


The sixth topic was an in-depth explanation of an HDE service by Xudong-san. He began by explaining the feature he was working on and its requirements. There had been a previous implementation of the feature, but Xudong-san was requested to redesign it. The original design utilised EC2, while the redesign was to utilise AWS Lambda. The reason was that, unlike EC2 instances, Lambda functions need no maintenance. Furthermore, Lambda functions can also be cheaper than EC2 instances.

However, utilising Lambda functions also had its own problems, mainly because he was utilising them for I/O-bound actions. He created a new design to solve this issue. This new design utilised two types of Lambda functions, consumers and controllers. Controller functions invoke consumer functions, and consumer functions can also invoke other consumer functions. He also highlighted the limitations of AWS that he needed to consider during his implementation.
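The talk didn't share the actual code, but the controller/consumer relationship can be sketched in plain Python, with ordinary function calls standing in for real (asynchronous) Lambda invocations; the task format here is made up:

```python
# Plain-Python sketch of the controller/consumer pattern; in the real
# design these would be separate AWS Lambda functions invoking each other.

def task_subtasks(task):
    # Hypothetical split rule: a task like "a/b" fans out into "a" and "b".
    return task.split("/") if "/" in task else []

def consumer(task):
    """Perform one I/O-bound unit of work; may fan out to more consumers."""
    results = [f"done:{task}"]
    for sub in task_subtasks(task):
        # A consumer can invoke other consumers for sub-tasks.
        results.extend(consumer(sub))
    return results

def controller(tasks):
    """The controller invokes one consumer per top-level task."""
    results = []
    for t in tasks:
        results.extend(consumer(t))
    return results

print(controller(["x", "y/z"]))
# ['done:x', 'done:y/z', 'done:y', 'done:z']
```

Splitting the fan-out across many short-lived functions is one way to keep each invocation within Lambda's execution-time limits when the work is I/O-bound.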


The seventh topic was ‘Airport Baggage Handling System’ by Kelvin-san. He is our current Global Internship Program (GIP) participant. Before his internship at HDE, he had had another internship, during which he worked on projects related to the baggage tracking system at HKIA. A baggage handling system manages activities such as checking baggage in to unit load devices, transferring baggage to other flights, and baggage claim. A good baggage handling system is reliable, handles numerous bags at the same time, minimises transfer time, and is completely automatic.

Apparently, baggage mishandling is one of the most common issues in airports today. Among the usual causes of baggage mishandling are human error and damaged tags. Furthermore, almost half of lost luggage is due to transfer-related incidents. Kelvin-san explained one of the solutions to baggage mishandling, which is an automatic RFID baggage tracking system. According to Kelvin-san, RFID has a higher successful read rate than barcodes. Towards the end of the session, he explained various components of an airport baggage handling system by describing pictures of them.


The day of the 31st MTS was also the last day of Kelvin-san’s internship. We had a small event for him and gave him some souvenirs. In turn, he shared his impressions of his time working with us. Thank you very much for your contribution, Kelvin-san!


As usual, we had a party afterwards :)


30th Monthly Technical Session (MTS) Report

The 30th Monthly Technical Session (MTS) was held on January 27th. MTS is a knowledge-sharing event in which HDE members present topics and have Q&A sessions, both in English.


The moderator of the 30th MTS was Jonas.


The first topic was ‘Introducing Yahoo! Pulsar: a Distributed Pub-Sub Messaging System’ by Okubo-san. Pub-sub stands for publish-subscribe, a messaging pattern in software architecture. In this pattern, publishers don’t send messages directly to subscribers. According to Okubo-san, communication in pub-sub happens by utilising topics. Publishers send messages of certain topics, and subscribers receive messages of certain topics.
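The topic-based decoupling can be sketched in a few lines of Python (this is a minimal in-process illustration, not Pulsar's actual API; the topic names are made up):

```python
from collections import defaultdict

class PubSub:
    """A minimal in-process sketch of topic-based pub-sub."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Publishers never address subscribers directly; the broker
        # delivers the message to everyone subscribed to the topic.
        for callback in self.subscribers[topic]:
            callback(message)

received = []
bus = PubSub()
bus.subscribe("orders", received.append)
bus.publish("orders", "order-42")   # delivered: someone subscribed to this topic
bus.publish("billing", "inv-7")     # dropped: no subscriber on this topic
print(received)                     # ['order-42']
```

A real broker like Pulsar adds persistence, ordering guarantees, and distribution across machines on top of this basic topic-routing idea.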

Pulsar is a distributed pub-sub messaging platform developed by Yahoo!. They have been using Pulsar since 2015, and recently the platform was made open source. Pulsar offers scalability, low latency, strong ordering and consistency guarantees, cloud-service-oriented design, and geo-replication. Okubo-san continued by explaining Pulsar’s architecture and comparing it to other messaging platforms.


The second topic was a re-explanation of an internal service, by Matsuura-san. This time, the focus was how he solved a particular problem. In this service, there are AWS Lambda functions which access EC2 instances. Initially, Fabric was utilised by the Lambda functions to execute shell commands in the EC2 instances. However, using Fabric caused some problems, such as KeyboardInterrupt or Operation not permitted.

To solve this, he used Paramiko instead of Fabric. Using Paramiko is a bit more complicated than using Fabric though, as he needed to compile it on Amazon Linux, then include it in a Lambda function’s deployment package.


The third topic was an in-depth explanation of an HDE service, by Ogawa-san. He began by explaining one of the backend components of the service. Then, he continued by teaching us how he solved a problem regarding filename encoding in the ZIP file format. According to Ogawa-san, we can do this in two ways. The first one is to utilise the language encoding flag (general purpose bit 11). The second one is to utilise Info-ZIP’s Unicode Path extra field.
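Python's zipfile module happens to demonstrate the first approach: when an entry name can't be encoded as ASCII, it stores the name as UTF-8 and sets general purpose bit 11 (0x800), so readers know how to decode it:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("日本語.txt", b"hello")    # non-ASCII entry name

with zipfile.ZipFile(buf) as zf:
    info = zf.infolist()[0]
    # Bit 11 (0x800) is the language encoding flag: the name is UTF-8.
    print(bool(info.flag_bits & 0x800))   # True
    print(info.filename)                  # 日本語.txt
```

Without the flag (or the Unicode Path extra field), a reader has to fall back to the legacy CP437 interpretation of the name bytes, which is what causes garbled filenames.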


The fourth topic was an explanation of an HDE service’s new feature, by Iskandar-san. He began by briefly reintroducing the service itself. Then, he immediately proceeded to teach us everything about the new feature, such as its concept, its architecture, its development process, and other technical details. He also talked about the changes the new feature will bring, such as additions to the user interface and new use cases. He had even made a video which explains how the new feature will actually work once it is released.


The fifth topic was about Bitcoin and blockchain, by Kelvin-san. He is our current Global Internship Program (GIP) participant. He began by explaining what Bitcoin is, particularly by comparing it to the existing financial system. Then, he taught us about blockchain, a database that serves as Bitcoin’s ledger. A blockchain is a distributed database maintaining a continuously growing list of ordered records (blocks). Once a block is added to the chain, it cannot be modified anymore. Furthermore, since the blockchain is distributed, every node in the network has the same blockchain. According to Kelvin-san, these properties make blockchain secure.
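The tamper-evidence property can be sketched with a toy hash chain (this illustrates only the hash-linking idea, not Bitcoin's proof-of-work or consensus; the record contents are made up):

```python
import hashlib
import json

def make_block(records, prev_hash):
    """Create a block whose hash covers its records and the previous hash."""
    body = json.dumps({"records": records, "prev": prev_hash}, sort_keys=True)
    return {"records": records, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def valid(chain):
    """A chain is valid if every block's hash matches its contents and links."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"records": block["records"], "prev": prev},
                          sort_keys=True)
        if block["prev"] != prev:
            return False
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["alice->bob: 1 BTC"], chain[-1]["hash"]))
print(valid(chain))              # True
chain[0]["records"] = ["tampered"]
print(valid(chain))              # False: editing any block breaks the chain
```

Because each block's hash feeds into the next, modifying an old block invalidates every block after it, which is why distributed copies of the chain can detect tampering.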


As usual, we had a party afterwards :)

29th Monthly Technical Session (MTS) Report

The 29th Monthly Technical Session (MTS) was held on December 16th. MTS is a knowledge-sharing event in which HDE members present topics and have Q&A sessions, both in English.


The moderator of the 29th MTS was Shihan.

The first topic was 'Global Internship Program (GIP) Annual Report 2016' by Yuri-san. She explained all that she had done in the past year, showed how much we had grown compared to the year before, and shared the lessons she learned along the way. She also talked about the future, specifically about the strategies she would like to try in the next year.


The second topic was 'Play with Favicon' by Shinohara-san. Favicons are websites' icons. We can see them all over our browsers, usually in tabs, bookmarks, the address bar, history, etc. Shinohara-san explained how he implemented the favicon of one of our services. He also showed some examples of dynamic favicons, one of which was a fully-functional Tetris game!


The third topic was a report on re:Invent 2016 by Arakawa-san and Okubo-san. As you may already know, re:Invent is Amazon Web Services' global customer and partner conference, which is held annually. Arakawa-san explained in detail the session that impressed him the most, Tuesday Night Live with James Hamilton. He also talked about re:Invent Central, where technologies developed by companies sponsoring re:Invent are exhibited.


On the other hand, Okubo-san shared his networking experiences and what he learned during the event. He told us how he got to exchange lots of business cards. He also showed some pictures of meals, venues, and learning sessions. The session that impressed him the most was How Netflix Achieves Email Delivery at Global Scale with Amazon SES.


The fourth topic was presented by David-san. He was one of our Global Internship Program (GIP) participants. He shared lots of lessons he had learned from his working experience so far. He specifically focused on the project management approaches he had tried before, such as stand up meetings, GitHub issues, and others. For each of those approaches, he explained both its advantages and disadvantages, and what he thought about it.


The fifth topic was 'An Introduction to WebRTC' by Alice-san. She was the other one of our GIP participants. WebRTC is the union of standards, protocols, and APIs which enables real-time communication between browsers. The advantages of WebRTC come from its security, speed, voice and video engines, and the fact that it is open source and patent-free. She also explained the WebRTC protocol stack and browser support.


The day of 29th MTS was also the last day of David-san's and Alice-san's internships. So, we had a small event for them, in which they got some souvenirs and shared their impression of the seven-week internship. Thank you very much for your good work, David-san and Alice-san! We're happy to have you with us.


As usual, we had a party afterwards.


28th Monthly Technical Session (MTS) Report

The 28th Monthly Technical Session (MTS) was held on November 18th. MTS is a knowledge-sharing event in which HDE members present topics and have Q&A sessions, both in English.


The moderator of 28th MTS was Bagus.


The first topic was a workshop of an HDE service, by Hayashi-san. He began by explaining what the service is and the motivation behind its development. So that other members can help test the service, Hayashi-san demonstrated how to install and use the service. He also explained how to uninstall the service. To close the presentation, Hayashi-san asked the audience to contact him if they find some problems in the service or if they have some ideas about improving the service.


The second topic was a comparison between Amazon Simple Email Service (SES) and an HDE service, by Okubo-san. According to his observation, SES and the HDE service differ in several aspects: verifying From addresses, sending emails, moving out of the sandbox, handling bounced emails, and the suppression list. Okubo-san concluded that SES is a good framework for transactional emails. On the other hand, the HDE service is not a framework, but it's able to send various types of emails.


The third topic was the logging and monitoring aspect of an HDE Service, by Jeffrey-san. This was a continuation of his presentation in the 25th MTS. Jeffrey-san began by explaining the logging approach that is used in his project. He spent more time explaining monitoring, because there are some issues related to it that he had to resolve. He described each issue and its solutions in detail. Because there was still time, he ended the presentation by explaining an automated task related to monitoring.


The fourth topic was task automation using Microsoft Azure, by Imaizumi-san. The tasks he intended to automate are the ones related to deploying HDE One services. Imaizumi-san began by explaining the motivation behind implementing this solution. He then showed the system architecture and demonstrated how the solution would work. He ended the presentation by explaining the relationship of his solution with other deployment task automation solutions in HDE.


The fifth topic was the internals of an HDE service, by Tanabe-san. Because there were lots of technical details to be explained, this presentation was longer than the others. Tanabe-san began by comparing this service with its predecessor. He then introduced the members of the project, and explained what each of the members is working on. He continued by explaining the system architecture and key concepts of the design process. Then, he explained each component of the system in detail. After that, he talked about release, deployment, and monitoring of the service. He ended the presentation by addressing lessons learned and future works.


The sixth topic was 'Genetics 2.0' by Alice-san. She is one of our current Global Internship Program (GIP) participants. Alice-san has an undergraduate degree in biomedical science and 4 years' worth of work experience in the field of molecular biology. She talked about applications of computer science in genetics. First, there is Cello, with which users can generate DNA sequences that describe logic functions for control of gene expression in bacteria. Second, machine learning is sometimes used in research, such as to model gene/protein interactions and identify genetic risk factors.


The last topic was 'Wrap those Naked Variables for Good' by David-san. He is the other one of our current GIP participants. Naked values are values that may sometimes be null. These values are quite troublesome: functions may not tell us that they return such values, they cause null pointer exceptions, and they make code full of if-not-null checks. David-san explained a pattern that helps handle this, the Maybe box. It tells us that a function might return a value. Just(a) is a box containing a value, while Nothing is an empty box. To handle both cases, we can use map. With this, functions will only be applied to Just(a), never to Nothing.
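The Maybe box described above can be sketched in Python (the talk's own examples weren't in Python; this is an illustrative translation of the Just/Nothing/map idea):

```python
class Maybe:
    """A minimal Maybe box: Just(a) holds a value, Nothing is empty."""

    def __init__(self, value, present):
        self._value, self._present = value, present

    def map(self, f):
        # f is applied only to Just(a); Nothing passes through untouched.
        return Just(f(self._value)) if self._present else Nothing()

    def get_or(self, default):
        return self._value if self._present else default

def Just(value):
    return Maybe(value, True)

def Nothing():
    return Maybe(None, False)

print(Just(3).map(lambda x: x + 1).get_or(0))    # 4
print(Nothing().map(lambda x: x + 1).get_or(0))  # 0: no crash, no if-checks
```

The caller chains `map` without ever testing for null; the emptiness check lives in one place, inside the box.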


As usual, we had a party afterwards.




The other day, at an internal technical study session, I gave a talk about DMARC, a technology for preventing email spoofing. In this post, I would like to summarize that talk.


DMARC (Domain-based Message Authentication, Reporting & Conformance) detects spoofed email using the sender domain authentication technologies SPF and DKIM. It provides a mechanism for email receivers to notify the domain owner when they receive spoofed email, and a mechanism for the domain owner to declare how spoofed email should be handled.




  • Aggregate Reports: summary reports of the number of emails received from the sender's domain and their authentication results.

  • Failure Reports: reported in real time when sender domain authentication fails. They contain the information needed to investigate the failed email, such as the sending IP and the message ID.


Email senders have receivers authenticate the email they send, using SPF and DKIM. When authentication fails, the sender can declare how they would like the email to be handled, using one of three policies:

  • none: do nothing.

  • quarantine: quarantine the email (put it in the spam folder).

  • reject: refuse the email (return an SMTP error).
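For illustration, a domain owner publishes this declaration as a DNS TXT record; the domain and report address below are hypothetical:

```
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The p tag carries one of the three policies above, and rua specifies the address that Aggregate Reports should be sent to.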


Corporate domains often send email from multiple systems, so there is a risk of authentication errors caused by systems that do not yet support sender domain authentication, or by DKIM keys or SPF records that are not published correctly.


  1. First, deploy DKIM and SPF.

  2. Receive email with Gmail, Yahoo Mail, and similar services, and confirm that authentication succeeds.

  3. Publish a DMARC record with the policy set to "none", and receive Aggregate Reports.

  4. Analyze the Aggregate Reports and check the authentication results for all destination domains.

  5. Once you have confirmed that your DKIM and SPF operations are sound, change the DMARC policy from "none" to "quarantine" or "reject".


In the United States, there is a report that PayPal adopted DMARC in 2007 and managed to drastically reduce spoofed email.

The goal of DMARC is to declare "reject" so that spoofed email never reaches users' mailboxes, but I think it can also serve as an effective tool for improving email deliverability, for example by receiving Aggregate Reports and analyzing how well sender domain authentication is deployed for your corporate domain.

Also, by receiving Failure Reports, you can detect authentication errors as they occur, investigate and fix their causes, and thereby keep your sender domain authentication operating properly.


To deploy DMARC, you first need to deploy sender domain authentication (SPF and DKIM). Customers Mail Cloud, the email delivery service we provide, can be used simply by configuring your existing mail server to relay email through it, and supports both DKIM and SPF.

We also offer consultation on DMARC deployment. If you are interested, please feel free to contact us via the inquiry form.