Zoom security flaws and Chinese links make US authorities nervous

Zoom’s rise to fame may only be matched by its fall from grace, as security flaws and apparent ties to China are laid bare for all to see.

It was only last week that Zoom CEO Eric Yuan penned a blog entry to calm fears over the video-conferencing service, but this additional post addresses statements from the University of Toronto’s Citizen Lab. Zoom has rolled out its own encryption software to enhance security, though the Toronto researchers suggest it has ‘significant weaknesses’.

“We appreciate the questions we are getting and continue to work actively to address issues as we identify them,” said Yuan. “As video communications become more mainstream, users deserve to better understand how all these services work, including how the industry — Zoom and its peers – manages operations and provides services in China and around the world.”

Firstly, the Toronto researchers question how effective Zoom’s security features actually are. The encryption is not end-to-end by industry standards, despite the company’s claims, and the way it has been designed and implemented is also under scrutiny.

“The Zoom transport protocol adds Zoom’s own encryption scheme to RTP in an unusual way,” the researchers state.

“By default, all participants’ audio and video in a Zoom meeting appears to be encrypted and decrypted with a single AES-128 key shared amongst the participants. The AES key appears to be generated and distributed to the meeting’s participants by Zoom servers. Zoom’s encryption and decryption use AES in ECB mode, which is well-understood to be a bad idea, because this mode of encryption preserves patterns in the input.”

These encryption keys could also be distributed through Chinese servers, which is a concern for everyone, as companies operating in China can be legally compelled by the government to hand over such keys. Zoom has said this oversight has been corrected and no international meetings will be routed through Chinese servers, but the damage may well have already been done.

When security and privacy in the digital economy are under discussion, a tarnished record can be very difficult to clean. Zoom has an incredibly long list of security incidents for a company which continues to trade, but a link to China is one mark which is almost impossible to shake off, especially when it comes to operating in the US.

Zoom is listed on the NASDAQ in the US, but its software appears to be developed by three companies in China, all known as Ruanshi Software, only two of which are owned by Zoom. The ownership of the third company, also known as American Cloud Video Software Technology, is unknown.

As it stands, 700 employees are currently in China, which is not unusual as it can save on salaries in comparison to the US, though it does open up the firm to pressure and influence from the Chinese Government. This is not a position which will make US authorities comfortable.

In New York, the Department of Education has banned all schools from using Zoom for remote learning, stating teachers will have Microsoft Teams functionality available as soon as possible. New York Attorney General Letitia James is also probing the privacy and security credentials of the company, a worrying sign for the business.

Security is a major component of the digital economy and Zoom just does not appear to be up to scratch. For every leak in the hull which is fixed, three more seem to emerge. The long list of security vulnerabilities was always going to catch up with the team, though it remains to be seen whether Eric Yuan can talk his way out of the apparent links to China, a potential death sentence in the US.

Automatic deployment from GitHub

I configured the server in such a way that after each commit to the master branch, the site is automatically generated.

On the GitHub side, I used a regular webhook, and on the server side, Gith.

Gith is a handy web server for Node.js that can accept and filter data from GitHub webhooks. My server that runs the site build looks like this:

var gith = require('gith').create(9001); // the port number here is illustrative
var childProc = require('child_process');
var path = require('path');

gith({
  // Listen to hooks only for the "master" branch
  branch: 'master'
}).on('all', function (payload) {
  console.log('Run deploy script on', new Date());

  // Run the site build script
  var deploy = childProc.spawn('sh', ['/web/deploy.sh']);

  deploy.stdout.on('data', function (data) {
    var message = data.toString('utf8');
    if (~message.indexOf('subscribe')) {
      // DocPad may ask about subscribing to the newsletter, we will refuse
      deploy.stdin.write('n\n');
    } else if (~message.toLowerCase().indexOf('privacy')) {
      // DocPad may ask about the privacy policy, let's agree
      deploy.stdin.write('y\n');
    }
  });

  deploy.stderr.on('data', function (data) {
    console.log('Error:', data.toString('utf8'));
  });

  deploy.on('exit', function (code) {
    console.log('Deploy complete with exit code ' + code);
  });
});

The deploy.sh project build script itself looks like this:

#!/usr/bin/env bash
git pull
git submodule foreach 'git checkout master && git pull origin master'
npm install
docpad generate
find ./out -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) -exec sh -c "gzip -7 -f < {} > {}.gz" \;

Debug mode

It often happens that a user of your site reports that an error occurs in some browser: JavaScript does not work, or elements overlap each other. But all your CSS and JS code is minified, and it is quite difficult to find the exact place in the source files where the error occurs.

In the future, such problems will be traceable with Source Maps, but for now not all minifiers and browsers support them.

The docpad-plugin-frontend plugin has a special debug mode. Since the structure of every minified file is stored in a JSON catalog, it is easy to emit the list of source files instead of the compiled one when needed.

To do this, in DocPad, I create a separate environment, in which I specify the frontendDebug: true option. If the frontendDebug option is true, then the assets () method of the docpad-plugin-frontend plugin will, if possible, return a list of source files instead of minified ones. Example for configuring docpad.coffee:

module.exports =
  environments:
    debug:
      frontendDebug: true
Now when you run DocPad in a debug environment, you will get HTML pages with source CSS and JS files and you can easily find the error:

docpad run --env=debug

CSS and JS resource management

Very often there is a need to manage which CSS and JS files are attached on various pages of the site. Let’s say all pages need to use the set1 fileset; all internal pages of the /about/ section must additionally use set2 and set3, but the /about/contacts/ page must use set4 instead of set2 (that is, set1, set4, set3, in that order). In addition, the URL of every resource must include the file modification date in order to effectively flush the cache.

To solve these problems, the docpad-plugin-frontend plugin was written. It adds an assets(prefix) method, which returns a sorted list of resources from the current document and the entire chain of templates applied to it. If a .build-catalog.json file exists in the project root folder, the plugin reads it and returns the list of resources prefixed with each file’s modification date.

For example, the resource-set problem described above can be solved as follows. For the main template default.html.eco, we specify the main set of files in its meta data:

js: "/js/fileA.js"

In the template about.html.eco, which inherits from the main template and applies to all /about/* documents, we specify:

layout: default
js2: ["/js/fileB.js", "/js/fileC.js"]
js3: ["/js/fileD.js", "/js/fileE.js"]

In the document /about/contacts/index.html, we override the js2 set:

layout: about
js2: "/js/contacts.js"

Now, when the page /about/contacts/index.html is rendered, calling assets('js') will return the following set of files:

/js/fileA.js
/js/contacts.js
/js/fileD.js
/js/fileE.js

Build front-end resources

For ease of development, I split CSS and JS files into separate modules, which are then concatenated and minified, a standard practice for high-performance sites. For the build, I use Grunt.js, which would seem to already have all the necessary tools for these tasks.

But even here I did not find anything suitable. The fact is that the date of the last update of the minified file is important to me, because I want to substitute it in the file URL to effectively reset the cache. Therefore, you need to update the target file only when one of the source files has changed.

To solve this problem, I wrote my own build plugin: grunt-frontend. It works as follows: while concatenating and minifying several files into one, it writes the structure of the source files and their md5 fingerprints to a special file, .build-catalog.json. On the next build, the plugin checks the structure and content of the source files: if nothing has changed, the target file is neither minified nor updated.

This not only reduces build time, but also preserves important data about the final file, such as its update date and md5 fingerprint. All of this is stored in .build-catalog.json, which is best kept outside version control.

For minification, CSSO (with automatic inlining of all files included via @import) and UglifyJS are used.