
Ep 24 - Gareth McCumskey, Solutions Architect at Serverless Inc.

Once again, welcome to the Talking Serverless Podcast. I am your host, Ryan Jones, and I am joined today by Mr. Gareth McCumskey, a Solutions Architect at Serverless Inc. from Cape Town, South Africa. A web developer for more than 20 years, he has spent the last four years building serverless applications: everything from breaking up a WordPress monolith into a set of serverless microservices, to email parsers, to full-blown CRUD applications. Gareth came from a more traditional PHP and Symfony web development background.

Introduction to Serverless Dashboard

Ryan Jones Q: How is everything going in your life?

Gareth McCumskey: Besides the effects of the pandemic, things have been keeping us busy these days. We're currently reworking our dashboard UI, paving the way for an upgrade. The team is busy releasing a whole bunch of stuff around components, and a lot of work is going into the Serverless Framework itself, so it has kept me really busy.

Ryan Jones Q: What type of things are happening on the Serverless dashboard?

Gareth McCumskey: We have had the dashboard for just over a year, released for folks to use as a tool to help monitor and manage their serverless applications and deployments into AWS accounts. It has a bunch of excellent features, but we realized we needed a better UX overall, so we overhauled it. It is in line with how components work as well. There's a ton of features we want to bring in that the old UI and UX wasn't geared for. Folks who have been using Serverless might be familiar with the existing dashboard; we're busy finalizing the new version. The same features are provided in the back end, with a different UI and UX on top to make things easier to use and understand. Hence, there's a lot of work in that. We also produced a lot of content over the last year around the previous UI, and that has a massive update coming up; that's going to be a little fun to do. In general, we are helping folks get used to the new way of doing things.

Ryan Jones Q: Was this change driven by user feedback asking for a more streamlined approach, or were there other pressures pushing you to change it?

Gareth McCumskey: What was interesting is that once you can sit back and see what people are using it for and how they're using things, you keep getting questions. An example is that folks kept asking about the distinction between an app and an org. The org is an organizational setting we applied in the dashboard so that folks could maintain some semblance of organization across their applications. We're trying to simplify the whole concept of app versus org to make it less confusing. That's one of the drivers of the new interface: less emphasis, to some degree, on the distinction between an org and the separate services within it. That's quite nice.

Some features have also been used less than we expected. For example, Safeguards. We are removing Safeguards from the UI because we found there weren't as many people who needed it, and we are essentially making it available as an open-source plugin instead. Folks who are still using Safeguards can continue to use them by exporting their existing settings out of the dashboard and using them in their projects. We have already had feedback that folks are happy to manage this themselves, and it simplifies our backend. There are other things like CI/CD, which is a feature of the old dashboard; we want to provide means to integrate it with other VCS providers. Right now, we support GitHub, and we're going to add support for Bitbucket as well. If you go now and look at setting CI/CD up, I think you'll see a little indicator that it's coming soon. So, a lot of this has been done based on cleaning up and just trying to make things a lot simpler.

Journey to Serverless

Ryan Jones Q: How did you start? How did you get to serverless, and what did the transition look like?

Gareth McCumskey: It all started in 2016, shortly after the Serverless Framework hit 1.0. I worked for an organization that had a monolithic WordPress stack powering their entire e-commerce platform. It was a company that sold everything online; they weren't the traditional type. There were some serious problems. They had been running on this platform for nearly ten years, but the volumes they were hoping to reach and the growth they were expecting were being held back. Now, I would never recommend that somebody rewrite an entire platform, as it's rarely a good idea. If you do, you need excellent reasons, and you need to slow down and consider carefully what you're going to do about it. At that point, I was naturally interested as well.

I had been a PHP developer, and I was looking into the usual familiar frameworks, trying to decide what to rebuild this platform with. I was asked to take a look at serverless by someone who was managing the AWS side. When I read about it, it seemed compelling, because the organization I was working with had two developers on the entire team. I feel comfortable setting up an EC2 instance, getting a Linux setup and a web server going, and so on, but I'm not confident that I'm the most educated at setting that up securely, with the best possible performance, load balancing, and all that kind of stuff. So, the promise of being able to deploy something into the cloud immediately was just too much to pass up. That's where I started sinking my teeth into serverless.

You never want to go into any new technology feet first and replace everything immediately; you want to sink yourself in and figure things out slowly. The test for us was a project we needed at that point: we were replacing our review mechanism. With the way Google works with the review rankings they show, you have to use a third-party reviews provider to do that. The company was switching from one provider to another, and this was the perfect opportunity to test out serverless, because we wanted to integrate with this review provider on our side. Hence, we decided to use serverless to do this. You have a piece of productive work here, but at the same time, it's not critical to the running of the platform. If reviews went down, it's not great, but it's not the end of the world; people can still check out and pay the company money.

Ultimately, we ended up rebuilding the review portion of the application with serverless and including it in the platform as a widget on the pages, loaded through JavaScript. That led us to begin re-architecting the entire WordPress platform into a collection of microservices using the Jamstack paradigm that is quite popular with serverless these days. Yeah, and that's where it started.

Ryan Jones Q: How did you see things change after this success? Did it cause a ripple effect across the service or your team?

Gareth McCumskey: One of the fears that I remember is that we would run out of lambdas. If you have a site with a few thousand simultaneous users and you only have 3,000 concurrent lambdas, you worry about whether you're going to start hitting those limits, especially when you have multiple Lambda functions potentially running in parallel and asynchronously and so on. That question was answered very quickly. You have to be incredibly busy to hit those limits because of the way lambdas warm up. Once they're warm, they can execute swiftly. So, the concurrency limit tends not to be a big issue unless you start hitting some massive user numbers.

Cold starts were another question for us, because we hadn't experienced the problem yet. The thing about being a developer working with serverless is that every single time you test something, because you're writing code, you deploy your code, and the Lambda service resets the function so that on the next invocation it uses the code you just pushed. Therefore, you experience a cold start every single time you test. Many developers start worrying about the impact of cold starts on the application when it goes into production. For us, that was the actual test. We could always roll back to the previous version and get it back up to speed fast. We found that this wasn't as big a problem as feared; I've seen this repeatedly over the years now, and it ended up not being a big issue. It was also about getting experience, working towards the strangler pattern, which, if you speak to many serverless people, is the way to transition from a more traditional stack to a serverless stack.

You're essentially picking a specific portion of your application, breaking it off in some way, and rerouting traffic from that portion of the application to your serverless side. A lot of the routing for the WordPress app was done through CloudFront in our case. Hence, we could drop CloudFront path behaviors in front of the existing paths and send traffic to a serverless front end, while the rest of the site kept going to the WordPress side. So, that was the general strategy, and it was great.
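The path-based routing described above can be sketched as a CloudFront configuration fragment. This is only an illustration of the strangler pattern, not the actual setup from the episode; the origin IDs and the /reviews/* path are hypothetical examples.

```yaml
# Illustrative CloudFormation fragment: route one path to a serverless
# origin while everything else falls through to the WordPress origin.
# Origin IDs and the /reviews/* path pattern are hypothetical.
DistributionConfig:
  DefaultCacheBehavior:
    TargetOriginId: wordpress-origin         # legacy monolith keeps the default
    ViewerProtocolPolicy: redirect-to-https
  CacheBehaviors:
    - PathPattern: "/reviews/*"              # the "strangled" portion
      TargetOriginId: serverless-api-origin  # API Gateway / Lambda front end
      ViewerProtocolPolicy: redirect-to-https
```

Because CloudFront matches the more specific path pattern first, traffic can be peeled off one route at a time while the monolith keeps serving everything else.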

Ryan Jones Q: Is there a different way to develop serverless applications now?

Gareth McCumskey: Over time, AWS has gotten better at reducing cold starts, to begin with. That's a trend you see a lot in serverless: you don't have to do anything, AWS makes things better behind you, which is a nice sort of free win. Generally, cold starts tend to be an issue for applications with lower traffic volumes. If you have an application with high enough traffic to continually keep your Lambda functions executing, you're generally always going to have a warm function, so cold starts become less of an issue. That's the pattern a lot of developers don't realize before they go into production. When you're developing, you're sending a couple of hundred requests a day, and if you are a large team testing Lambda functions, your likelihood of cold starts is a lot higher. As soon as you get into production and you have a few thousand to a few hundred thousand requests or more a day, Lambda functions stay warm and those cold starts don't happen as much.

Thankfully, the technology keeps changing. AWS has now introduced features like provisioned concurrency. If you end up in a situation where you see cold starts happening more often than you're comfortable with, you can pay to have Lambda functions always available. Generally, I recommend it as a last resort for eliminating cold starts, because it will cost you money. Personally, with regard to the size of runtimes, I've not found that to have as much of an impact.
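In the Serverless Framework, provisioned concurrency is a per-function setting in serverless.yml. A minimal sketch, where the function name and handler path are hypothetical examples:

```yaml
# serverless.yml fragment: keep two pre-initialized copies of one function
# warm at all times. Function name and handler are hypothetical.
functions:
  checkout:
    handler: src/checkout.handler
    provisionedConcurrency: 2   # instances kept initialized; billed while provisioned
```

Note that provisioned instances are billed for the time they are kept warm, which is why it makes sense as a targeted fix for latency-sensitive functions rather than a blanket setting.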

Opinion on PHP and Lambda Support

Ryan Jones Q: Do you think running PHP on Lambda is feasible? Is that something you would recommend?

Gareth McCumskey: There are a couple of ways to do PHP on Lambda. You can bring your own runtime with Lambda, and there's even a project called Bref, if anybody's not familiar. We've worked with the Bref project's creator to help him get it running smoothly with the Serverless Framework. The Serverless Framework acts as the deployment mechanism underneath a PHP project, and Bref handles the interface between the two.

For many PHP developers, the whole ecosystem is about using tools like Symfony, Laravel, and so on. You build your application in a framework and then deploy it to wherever you're going to host it. If you are a traditional PHP dev using these frameworks and you want some of the benefits of serverless by hosting things on Lambda, then a project like Bref is a great way to get going, as it uses your existing knowledge and just pushes that into Lambda. This is similar to what we've seen many folks doing with Express, for example, where you build an Express application and deploy it into a Lambda, which isn't a wrong pattern. It's one of the reasons one of our components is now the Express component.
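The Bref setup described above can be sketched in a serverless.yml like the following. Treat this as an illustration: the service name and region are hypothetical, and the exact runtime/layer names depend on the Bref version you install.

```yaml
# Illustrative serverless.yml for a PHP framework app on Lambda via Bref.
# Service name and region are hypothetical; runtime names vary by Bref version.
service: php-demo
provider:
  name: aws
  region: eu-west-1
plugins:
  - ./vendor/bref/bref      # Bref ships a Serverless Framework plugin via Composer
functions:
  web:
    handler: public/index.php  # the framework's front controller
    runtime: php-81-fpm        # Bref's PHP-FPM layer for HTTP workloads
    events:
      - httpApi: '*'           # route all HTTP traffic to this function
```

The key idea is that Bref supplies the PHP runtime as a Lambda layer, so the framework's front controller works largely unchanged.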

You can also bring your own runtime, which supports running individual PHP Lambda functions, and that's quite useful. Instead of having a single API Gateway endpoint pointing at a PHP monolithic stack, you can have API Gateway pointing at individual PHP Lambda functions. I particularly don't care what language anybody uses to build their functions; if it works for you, if you get the performance characteristics you want, and if your development team knows the language, it ultimately doesn't matter to me.

Teams have traditionally moved away from things like Java in the serverless world because of the cold start times you often get with Java-based Lambda functions. Still, I'm seeing more Java-based Lambda functions being deployed these days, mostly because teams realize the productivity they can get out of a language ecosystem they already know. In my case, moving away from PHP to a Node-based world was driven primarily by that. What I can say from my experience is that when you start sinking into serverless and using the many managed services available to you in the cloud, on AWS, you end up writing far less code. That's why I've started classifying serverless as a sort of low-code, no-code scenario where you can build entire applications with a couple of dozen lines of code. It does things that typically take a few hundred lines, which is compelling when you think about how most problems with applications these days come down to the code people write. You'll probably find that using serverless, you end up writing less code anyway.

Serverless Components
Ryan Jones Q: What are serverless components, and what are they trying to solve?

Gareth McCumskey: The whole idea behind serverless, to me, is to get developers to put solutions out for the businesses they work with. I find that many developers spend a lot of time trying to build really fancy solutions to elementary problems. They even take a lot of time chasing the perfect codebase, wanting to create something spectacular, when the business they're working with just wants them to solve the problem and keep customers satisfied. Components simplify a lot of this.

Components take away a lot of the infrastructure overhead that developers often have to code for. There are many things to pick up, learn, and understand before building a proper serverless application. Unfortunately, that affects adoption to some degree; it affects folks' ability to solve problems, because they have to spend a lot of time learning new stuff first. The idea with components is to take the best ideas and implementations we've seen for solving specific use cases, and give you an effortless way to get a solution out without worrying about the underlying configuration of the many services themselves.

Serverless Framework v1 is incredibly flexible. It's good at letting you compose an application that solves a problem by combining all these managed services in the cloud. But if you don't know what Kinesis is, or what Lambda, API Gateway, DynamoDB, or any of these other tools are, they're not going to be of much use to you. What components do is abstract that one step further. Do you want a website? Do you want to put HTML, CSS, and JavaScript into the cloud and make it publicly available? Use the website component, point it at the static files you've generated from React or Vue or whatever front-end framework you like, and run serverless deploy. It goes into your AWS account, creates the S3 bucket, creates the CloudFront distribution in front of it, and will even hook up a domain if you've added that to the configuration file. Ultimately, for something like the website component, you need a configuration file of a minimum of five to six lines to get something up and running. It really doesn't need much; you need an AWS account and some files to push up.
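A configuration of the size described might look like the sketch below. This is an assumption-laden illustration: the name, source path, and domain are hypothetical, and the exact input keys depend on the website component's version.

```yaml
# Illustrative serverless.yml for the website component.
# The name, src path, and domain are hypothetical examples.
component: website
name: my-site
inputs:
  src: ./dist              # static files built by your front-end framework
  domain: www.example.com  # optional: hooks up a custom domain via CloudFront
```

Running serverless deploy against a file like this is what provisions the S3 bucket and CloudFront distribution mentioned above.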

You could even create an index.html file in Notepad, for all the website component cares, and it's going to put that up for you and make it available. That doesn't take any knowledge of the infrastructure underneath. If you're a developer who wants to understand everything underneath, you can go ahead, but in the meantime, you are still solving problems. And it's not just the website component. The whole point is that we want to build a collection of solutions to the common issues that we see.

Another pattern we see is folks deploying Express applications into AWS using Lambda. There are ways to do this, and the Serverless Framework provides the tools for it; you can use plugins to ease the configuration and setup, but it's a bit annoying to do every time. Instead, we've got the Express component: you run serverless deploy, and it deploys into your AWS account. Now you've got an Express application hosted in Lambda, fronted by API Gateway, with all the right configuration that's needed. You can go ahead and build your Express application hosted in Lambda without having to worry about any of the underlying configuration. If you just want to solve the problem, you keep using Express as you've used it in the past. And if you want, you can take the time to understand what's happening in your AWS account, but the problem is already solved.
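The Express component's configuration is similarly small. Again a hedged sketch: the name and source path are hypothetical, and input keys vary by component version.

```yaml
# Illustrative serverless.yml for the Express component.
# The name and src path are hypothetical examples.
component: express
name: my-api
inputs:
  src: ./api   # directory whose app.js exports a standard Express app
```

The component wires the exported Express app to Lambda and API Gateway, so the application code itself stays a plain Express project.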

We are not just hoping to build components ourselves and make everybody use them. We've built a central components core that allows you to create a component on top of technologies that can help you do this. The idea is that I can build a component that does things in AWS; I can write my own components, to the point where we've even been developing a registry to publish components for reuse. It's an npm-registry style, where folks can build their components, publish them, and make them available for others to use. We recently had one of our engineers build the Lambda component. Now you can have the code for an AWS Lambda function deployed into AWS with three or four configuration lines. You don't have to do anything else: just type serverless deploy, and some code that you want to run is in the cloud.
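The "three or four configuration lines" might look roughly like this; the component name, source path, and handler are hypothetical and depend on the published component's actual inputs.

```yaml
# Illustrative serverless.yml for a bare Lambda function component.
# Component name, src path, and handler are hypothetical examples.
component: aws-lambda
name: my-function
inputs:
  src: ./code             # folder containing the handler code
  handler: index.handler  # exported function Lambda should invoke
```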

I wanted to touch on some of the other aspects of components and the reasons for building them in the first place. One of the starting points for us was that we wanted to make the adoption of serverless easier. We also wanted to remove some of the difficulties that serverless in general still has, such as local development. It's a difficult topic, because local development is challenging to do with serverless. The whole idea behind serverless is that you're consuming managed services, and much of the time you don't have code to execute. If you don't have code to execute, there's nothing to run locally; you're potentially just going to have some small amount of glue code that you need to help automate some of the stuff.

We wanted to find a way to make local development feel better. The best way to build serverless applications with the Serverless Framework is to deploy into the cloud repeatedly while developing: you build something, you deploy, you test; you edit, you deploy, you test, because ultimately you need to test how your application runs in the cloud. A lot of work has been done elsewhere to try to emulate AWS services locally, but that leaves you in the same situation we've always been in, where it works on my machine and, as soon as you put it into the real environment, it falls over because of something you didn't quite account for. That's why pushing to the cloud and testing there works better in most cases.

Components took this a step further. One of the annoying parts of testing in the cloud with serverless is the amount of time you spend waiting around for stuff to get into the cloud, so we ended up building a deployment engine for components that sits in AWS itself. With the Serverless Framework, we're used to running serverless deploy on your local machine, with your local machine building a CloudFormation template, zipping it up, and uploading it into S3.

Instead of going through that process of deploying into the cloud with CloudFormation managing state, the original version of components we released last year didn't use CloudFormation. It used the AWS SDK directly, which is why it can be so fast: it's not waiting for CloudFormation to build entities and resources in AWS. The problems we found were state management, and deploy times were still a little long in that case. As soon as we moved the deployment engine into AWS itself, we found we could make AWS SDK calls internally within AWS, which is blindingly quick. When you're editing code and it gets deployed into the cloud in seconds, that testing loop doesn't feel as strange anymore. It's something we can even demo.

For example, with the current version of the Express component deployed, you can make changes to that app.js file. You can watch the files in your project: you make an edit, you save the file, and it instantly deploys into the cloud, because the deployment happens in AWS already. You can test your endpoints in your browser and see your console logs in dev mode; you don't have to break out into CloudWatch to see those, and you don't have to wait a minute or two for a deployment to finish. This is something any components developer can tap into, as with state management.

The components core that we built allows you to develop a component that integrates into the dev mode feature, which in turn integrates into state management, so your components can store state in the cloud. You have central state management now, just like CloudFormation. Then there is the dashboard, the one we've been reworking UX-wise, which supports our components infrastructure. You need some way to see how components perform and whether they're erroring out. What's cool is that, as a component creator, you can pull out whatever metrics you need from the resources you've deployed and surface them in the dashboard as a chart or graph. Hence, anybody who deploys your component and gets traffic on it can log in to their Serverless account and see that traffic. All these things work together to improve the experience of developing serverless applications and to provide a better experience for folks. Moreover, all of our components are open source as well. If anybody wants to, for example, take the Express component and change it to fit their needs, they can fork that project, modify it, publish the brand-new component they've created to the registry, and continue to use it from there.


Ryan Jones Q: Do you have any advice for new people that are getting started with Serverless? How would you start today?

Gareth McCumskey: Well, for my own start, the Serverless Framework's excellent documentation helped me get going to no small degree. There is enough information in the docs to get going and play with things. I have also been working on finishing a free course that we put out; if you go to the Learn section of the site, there are two courses. The one I'm still finishing has a few parts left, but there is more than enough for somebody new to serverless. We are also going to be ramping up on the educational side of components, because I feel components are a really great way to get somebody into serverless. You can do three or four lines of configuration, type sls deploy, and there you go: you've got a serverless thing in the cloud, working and operational, and you can take it from there to learn more, which is pretty cool. So really, I think the best place to start is to take a look at the docs in the Getting Started section and then move into the Learn section.

I think the best documentation we've got right now is on the GitHub project for serverless components, at github.com/serverless-components. You will find all of our current components there, and you can pick one to try out. One of the other exciting things we've been able to do with components is integrate creating them into our new dashboard. When you create an account, you'll be onboarded and able to click and select what you want to deploy. If you're going to deploy a Vue website, you can click the Vue starter; it'll give you the instructions right on the screen, automatically build the component for you, automatically deploy it, and you can start working with it. Therefore, if you want to get started with components, the easiest way is to sign up and begin selecting components.

Ryan Jones Q: How can people connect with you?

Gareth McCumskey: The best way is on Twitter; my handle is @garethmcc. I encourage anybody who's interested in chatting about serverless to reach out, and you can ask any questions. Everybody starts somewhere, and everybody is still learning in serverless; it is a very new field.

Ryan Jones: Awesome. To those listening, this has been the Talking Serverless Podcast with Ryan Jones. If you like our show and want to learn more, check out the Talking Serverless website. Please leave us a review on iTunes, Spotify, or Google Podcasts, and join us next time as we sit down with another fantastic serverless guest!
