I joined the Operational Cyber team. The role involves vulnerability research (reverse engineering and 0-day discovery) and high-assurance software engineering.
Shopify is a multi-billion-dollar e-commerce company, providing online stores, payments, marketing, point-of-sale systems, and more.
I joined Shopify via the acquihire of eporta. By the time I left, they still hadn't worked out what to do with all the eporta software engineers they had acquired.
While at eporta, I worked a 4-day week and used the 5th day to study part-time for a university degree. Shopify were not willing to let me work fewer than 5 days per week.
I did not want to cause problems during the acquisition, so I accepted Shopify's terms. However, I made clear to my CTO and CEO, and to Shopify HR, that I was not willing to abandon my degree halfway through: I needed one day a week for studying.
Shopify would not budge on the 5-day week, so I looked for an employer who would let me finish my degree.
eporta provided an online B2B marketplace and shops for the interior design industry.
The marketplace was the original product, implemented in Django. The online shops were the result of a pandemic-induced pivot: a backend API built with Node.js and the Serverless Framework, with a Next.js frontend.
Though not the most technically exciting work, eporta was an excellent company for product development. We worked in small product teams, each containing software engineers, designers, and product managers. We had regular contact with our customers, including face-to-face sessions most weeks, and iterated very quickly. The whole company worked together on product discovery, using opportunity solution trees. I learned what MVP really means!
Gower Street is a data analytics company in the film industry. Their primary product is a simulation of the global box office, which their analysts use to predict how much revenue films will make in different markets.
The software team performed four main tasks:
Our services ran under Docker Swarm on AWS EC2 instances, all managed with Terraform.
We used a variety of programming languages, mainly for legacy reasons: Clojure for the web app, Go for ETL, and Python for the data science models.
When every cinema on the planet closed in 2020 due to the COVID-19 pandemic, our CEO told us that the company had no money and could not afford to pay us for the month we had just worked.
I worked in a robotics research team, implementing computer vision and machine learning algorithms on heterogeneous embedded processors.
My main role was to take research prototypes, typically developed on high-powered desktops or laptops in a high-level language such as MATLAB, and reimplement them on low-powered embedded processors.
As part of this role, I evaluated potential processors and hardware platforms. I also acted as a liaison with PhD students at the Dyson Robotics Lab at Imperial College.
It became apparent that my role as a software engineer within "upstream research" was superfluous: my researcher colleagues were, quite reasonably, not willing to be constrained by practical engineering requirements when working on proof-of-concept prototypes.
Unfortunately, politics within the company prevented the development of anything beyond a proof-of-concept: the "upstream research" team was entirely separate from the "product development" team and the relationship between the two was not collaborative.
After trying for some time, without success, to resolve these political issues, I burned out.
I worked on cross-domain products: highly secure network gateways and firewalls. I worked on both classified projects for the UK government and a commercial product.
The commercial product was IndustrialProtect, a system to allow secure networking for industrial control systems. This was used to protect power plants, oil refineries, and similar Critical National Infrastructure from cyber-attacks.
The work involved writing networking components, mostly in C++, that ran on custom hardware with stringent reliability and performance constraints.
I worked on the Dyson 360 Eye robotic vacuum cleaner. This contained an OMAP 3 processor running Linux, with most of the robot's behaviour controlled by a C++ application.
One of my main contributions was performance optimization, which included:
However, I also worked on other parts of the application:
Maxeler designed and sold hardware accelerators: PCIe cards with one or two FPGAs and a lot of DRAM, used to perform massively parallel numerical computations very quickly. We were a small startup (I think I was the 7th employee) and our main competitors were NVIDIA, who were trying to do the same thing with GPU-based accelerators, and Intel, who were trying to convince people to stick with CPUs (while also experimenting with the Xeon Phi). In the end we lost: GPUs are the standard in high-performance computing today. But it was an exciting time, and we came pretty close to redefining how HPC is done!
My primary role was as an applications engineer, which entailed profiling applications, identifying candidates for acceleration, porting them to our hardware platform, and optimizing the result. However, we were a small company so we all did at least some work at every level of the stack: I also worked on our compiler, wrote the first version of our runtime, and developed our kernel driver. At one point, I spent some time sanding heat sinks.
After 18 months, I moved from London to San Francisco to help set up our new Californian office, working on-site with customers, embedded in their teams, and remotely with the London office.
These are some publications related to my work:
I worked in the International Investment Management department, collecting data on European stock markets. This mostly involved writing Perl scripts to munge data from a variety of poorly defined and variable input formats, then load it into FactSet's proprietary time-series database. Some of the data processors, and the database itself, were implemented in C++.
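To give a flavour of that munging, here is a minimal sketch in Python (rather than Perl): two invented feed formats normalized into one record shape. The field names and formats are illustrative assumptions, not FactSet's actual feeds or schema.

```python
from datetime import datetime

# Illustrative only: these feed formats are invented for the sketch.
FEED_A_LINE = "2004-03-01;VOD.L;142.25"   # ISO date, semicolon-separated
FEED_B_LINE = "VOD LN,01/03/2004,142.25"  # UK date, comma-separated

def parse_feed_a(line):
    date, ticker, price = line.strip().split(";")
    return {"date": datetime.strptime(date, "%Y-%m-%d").date(),
            "ticker": ticker,
            "close": float(price)}

def parse_feed_b(line):
    ticker, date, price = line.strip().split(",")
    return {"date": datetime.strptime(date, "%d/%m/%Y").date(),
            "ticker": ticker,
            "close": float(price)}

# Normalize every source into one record shape before loading it
# into the time-series database.
records = [parse_feed_a(FEED_A_LINE), parse_feed_b(FEED_B_LINE)]
```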
Most work took place on Alpha and Itanium servers running OpenVMS. I also worked on an experimental project to port the company's infrastructure to Linux on x86-64 machines, focusing on the build system, in particular Perl XS compilation and the dynamic linker.
While at this company, I also joined the London Perl Mongers, which was my first experience with a programming language community.
I did not find this role technically challenging. All the really interesting technical work happened at the company headquarters in the USA: our satellite office in London just handled ETL duties for European financial data.
I had to badger managers constantly to get a few scraps of interesting work on the OpenVMS-to-Linux porting project, but even then almost all of it was done in the USA. The people were very friendly and I had an excellent manager (who later went on to better things), but the work was just boring.
While at university, I worked part-time for a web development and hosting company. During term time, I worked remotely as a sysadmin for their Linux servers, with particular responsibility for the mail servers. I was also on call to visit the data centre and fix any problems that required an engineer to be physically present.
During the summer holidays, I worked in the office, developing websites for customers as well as tooling to automate hosting administration tasks. I built a web-based control panel that used Python and XML-RPC to administer the web, email, and DNS servers.
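A minimal sketch of that pattern, using the Python standard library's XML-RPC support; the administration function here is a hypothetical stub, not the panel's actual API.

```python
from xmlrpc.server import SimpleXMLRPCServer

def create_mailbox(domain, user):
    # Hypothetical stub: the real control panel would update the
    # mail server's configuration here.
    return f"created {user}@{domain}"

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(create_mailbox)
server.serve_forever()

# The web frontend would then call the function remotely:
#   import xmlrpc.client
#   proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
#   proxy.create_mailbox("example.com", "alice")
```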
I took part in Imperial's Undergraduate Research Opportunities Programme in 2005, working on a project to design an FPGA implementation of LINPACK, the benchmark used to rank supercomputers for the TOP500 list. I analyzed and profiled the LINPACK benchmarks and wrote parallelized versions in Handel-C.
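For context, LINPACK solves a dense linear system by LU factorization with partial pivoting, and almost all of its floating-point work is in the trailing-submatrix update, which is where the parallelism a hardware implementation can exploit lives. Here is a textbook sketch in Python, not the Handel-C code itself:

```python
import numpy as np

def lu_factor(a):
    """In-place LU factorization with partial pivoting: the kernel that
    the LINPACK benchmark times. Pivot bookkeeping is omitted; this is
    a textbook version for illustration only."""
    n = a.shape[0]
    for k in range(n - 1):
        # Partial pivoting: swap in the largest element of column k.
        p = k + np.argmax(np.abs(a[k:, k]))
        a[[k, p], :] = a[[p, k], :]
        # Multipliers for this column.
        a[k + 1:, k] /= a[k, k]
        # Rank-1 update of the trailing submatrix. Each element is
        # independent of the others, so this is the loop to
        # parallelize, on CPU cores or across FPGA logic.
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])
    return a

lu = lu_factor(np.random.rand(4, 4))
```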
For my final undergraduate project, I investigated the representation of continuations and control flow in X, a novel logical calculus.
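For readers unfamiliar with continuations: a continuation reifies "the rest of the program" as a value. A tiny Python sketch of continuation-passing style, which illustrates the general idea only, not the X calculus's actual syntax:

```python
# Continuation-passing style: each function takes an extra argument k,
# the continuation, representing the rest of the computation.
def add_cps(x, y, k):
    return k(x + y)

def square_cps(x, k):
    return k(x * x)

# Compute (2 + 3)^2 by threading continuations explicitly: the
# program's control flow is now visible as ordinary data.
result = add_cps(2, 3, lambda s: square_cps(s, lambda r: r))
assert result == 25
```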