Ep 37: Bob Wall, CEO of Commandeer
Welcome to the Talking Serverless Podcast. I am your host, Ryan Jones, and I am joined today by Mr. Bob Wall, CEO of Commandeer, a product aimed at increasing developer productivity by centralizing the developer experience and tooling into a single desktop application. A talented and creative developer, Bob is making waves in the serverless space with his new product, Commandeer.
Introduction to Commandeer
Ryan Jones Q: What is new in 2020, and how is the Commandeer team doing?
Bob Wall: 2020 has been interesting. We moved from Los Angeles to South Carolina in September of last year, and later on we moved into a house that we purchased, where we've been for about six weeks now. I have two young ones, and 2020 has felt very long. Things feel pretty much back to normal here, and hopefully that continues.
Ryan Jones Q: What is Commandeer, and what does it solve?
Bob Wall: I've been doing serverless technology for four or five years now, and I built a couple of large-scale products before Commandeer. The concept of serverless is really fascinating because, if you go deep into it, it's all event-driven programming. As we kept building things, what would happen is that we'd whiteboard an idea: a cron process runs and throws some items onto a queue, and the queue has a Lambda that processes them. Taking that to actual fruition is a very long process, and it requires pretty highly skilled developers to execute. The whole time, you're just visualizing the system working in your mind, and that is where Commandeer came to fruition.
So, it's heavily focused on AWS right now. There is a system diagram: S3 connecting to a Lambda, connecting to CloudWatch logs, connecting to alarms. You can see a zoomed-out picture, but of your existing system, not a schematic that you drew and that then got out of date. You connect to your system in Commandeer and you're looking at it from there. When we were doing all-serverless development, you'd save data to DynamoDB, and you'd have streams on the back end with Lambdas that do something with that data: maybe send out an email to somebody, or save it to an S3 data lake, things like that. The problem was that, with Dynamo, you had to either use the AWS console website or, if you were running it locally via LocalStack, connect with some small open-source tool that isn't supported anymore. The premise of Commandeer is to unify that, so you can see your Dynamo data, your S3 data, your Athena data, and on and on, all from one central location.
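The stream-to-Lambda pattern Bob describes can be sketched roughly as follows. This is an illustration only: the bucket name and the key layout are assumptions, not Commandeer's or AWS's specifics.

```python
import json

def record_to_object(record):
    """Map one DynamoDB Streams record to an S3 key and JSON body.
    The key layout (data-lake/<table>/<event>/<sequence>.json) is
    purely illustrative."""
    dyn = record["dynamodb"]
    key = "data-lake/users/{}/{}.json".format(
        record["eventName"].lower(), dyn["SequenceNumber"])
    body = json.dumps(dyn.get("NewImage", {}))
    return key, body

def handler(event, context=None):
    """Lambda entry point: write every stream record to the data lake.
    The bucket name is a placeholder."""
    import boto3  # imported here so the pure helper stays testable offline
    s3 = boto3.client("s3")
    for rec in event["Records"]:
        key, body = record_to_object(rec)
        s3.put_object(Bucket="my-data-lake", Key=key, Body=body.encode())
```

Attached to a table's stream, a handler like this is what turns Dynamo writes into the email sends or data-lake saves mentioned above.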
More Discussion on Commandeer
Ryan Jones Q: Were there any other outside influences that led to founding Commandeer?
Bob Wall: I think two good examples are Visual Studio Code and Xcode, these push-button IDEs. One of the reasons Node became so amazing is that it was abstracted away from the IDE; the aha moment with Node was that in three lines of code from the command line, I'm running a server that used to require Apache, or MAMP, or IIS. But the whole basis of Visual Studio and Xcode is that you write your code, you press a play button, and then it happens. I think that's one thing that is just missing in the cloud right now. There are all these amazing managed services now on AWS, plus SendGrid, Twilio, Auth0, and a bunch of others, but you have to manage them all in their own web portals, which is workable for a production environment. For development, a centralized form helps a lot. I think those IDEs and Visual Studio Code were instrumental.
What Commandeer does is give you the tree view on the side, but then you get to see a detail view. We thought about building plugins for Visual Studio Code and IntelliJ, and things like that, but the part that gets lost is that everything happens over in the side nav; there's no real Dynamo viewer or S3 editor built into those IDEs. Still, I think those are some inspiring tools.
Ryan Jones Q: Where is Commandeer headed in the future? What is the approach for cloud developers?
Bob Wall: The initial incarnation of this has been worked on for two years now; the goal was to get to a robust state in terms of the AWS ecosystem. The way I see it going forward is a plugin system. I have tons of different thoughts on open source versus closed source and things like that, but I'll stay away from those. Say somebody wants to build an excellent Redis UI so you can manage Redis locally and in the cloud: you could build a plugin for Commandeer, plug it in, get accepted into the community, and then it becomes a usable service. The concept is that if you have a managed service out there that you can run locally via Docker and/or in the cloud, then Commandeer can provide those kinds of rails.
Ryan Jones Q: Are you all centrally located, or are you dispersed across the US or the globe?
Bob Wall: We are basically in the US, but we have some overseas support too. The core was me and another co-founder, who built most of it with a little bit of help. But he is in the world of Serverless Guru, where he has a consulting firm. One of the things about building a product is that it's all-consuming, so there's a sliding scale of where your passions sit on these things. Alex is now focused on his company, and he is also an advisor. I have taken the reins of Commandeer, and we're figuring out the best strategy for how we build out the team going forward, because it's a super small group. We have done it mostly bootstrapped.
My vision of Commandeer is that it gives tons of different insights that you can't see anywhere else, but that vision is still another 18 months of features away from being solidified. At that point, it becomes this open plugin type of system. We have devs in Latin America and Russia, start-ups in Ukraine, and then other devs in India. A lot of this stuff overlaps, much like the podcast versus the consulting side with Serverless Guru; beyond that, it all intertwines. So that's the team.
Ryan Jones Q: What insight have you gotten from the early adopters of Commandeer? Are they start-ups or established companies?
Bob Wall: The exciting part of the demographic is that over 160 or 170 countries have visited the website, and somewhere over 100 or 120 countries have used the app at this point. So, it's a worldwide problem that people are seeking an answer to. It ranges from billion-dollar, Fortune 10-type companies down to one CTO with a co-founder, with hundreds of different companies in each of those buckets. It is an acute problem that people have been seeing over the past two years. I think the serverless wave hasn't even hit yet; we're just going to see more and more companies investing in it, because the promise of serverless to me, at least for non-greenfield projects, is that you can add new pieces of functionality without messing with your big, monolithic, crazy system. We'll never fully know our system again the way we did when it was a monolith, but now we have all these different serverless routes happening. You start to see it more and more with big companies; we have tons of people from one particular company signing up individually, which is a good sign.
Ryan Jones Q: What has been one of the biggest benefits for the clients who use Commandeer?
Bob Wall: One exciting case was a company that reached out and said, "We have 1,000 Lambdas, but I can't see anything in your product." We're using the AWS SDK, so we had to loop through the pagination token and figure out how to do that in a highly cached way. Or people using the DynamoDB reader: doing a demo with them, they'd show us an issue where they have rows of data and say, "I'm looking at ten rows, but I have to scroll through 100 pages in the browser or the app." So, as we put things out there and people use them, we make productivity gains or adjustments so the system works better.
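Looping through the pagination token that Bob mentions looks roughly like this with the AWS SDK for Python, where `list_functions` pages with `Marker` and `NextMarker`. The generic helper is our own illustration, not Commandeer's code.

```python
def list_all(page_fn, items_key, token_key="NextMarker", marker_key="Marker"):
    """Drain a paginated AWS list call by following its continuation token."""
    items, token = [], None
    while True:
        page = page_fn(**({marker_key: token} if token else {}))
        items.extend(page.get(items_key, []))
        token = page.get(token_key)
        if not token:
            return items

def list_all_lambdas():
    """Usage sketch: fetch every function, even past the per-page limit."""
    import boto3  # imported here so list_all stays testable offline
    client = boto3.client("lambda")
    return list_all(client.list_functions, "Functions")
```

The same helper works for any list call that uses the Marker/NextMarker convention; caching the result, as Bob describes, would sit on top of this loop.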
One interesting one is GovCloud: people working on GovCloud can mimic it in LocalStack, but as a developer you can't get keys to GovCloud. So when you're building stuff, you're flying pretty blind: you write it, throw it over the mountain, and it shows up in GovCloud. People are running their Lambda invokes in Commandeer, testing locally against LocalStack, and feeling confident it will work the same way. On the GovCloud stuff especially, as we see different use cases across the board, we have a perfect way to do end-to-end management of SQS.
We see workflows where you can throw messages onto the queue and see the invocation logs of the Lambda, completing the cycle, rather than just testing the Lambda that consumes the queue: you take one step up, drop a message into the queue, and see the invocation. There is a slight difference between invoking a Lambda directly and having it invoked by another service, including permissions and things like that, which can be problematic.
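A minimal sketch of that one-step-up workflow against LocalStack might look like this. The edge endpoint is LocalStack's default, while the queue URL and log group name are placeholders.

```python
import json

LOCALSTACK = "http://localhost:4566"  # LocalStack's default edge port

def build_message(queue_url, payload):
    """Pure helper: kwargs for sqs.send_message from a dict payload."""
    return {"QueueUrl": queue_url, "MessageBody": json.dumps(payload)}

def drop_and_watch(queue_url, payload, log_group):
    """Drop a message on the queue, then read the consuming Lambda's
    CloudWatch log group to confirm the service-triggered invocation."""
    import boto3  # imported here so build_message stays testable offline
    sqs = boto3.client("sqs", endpoint_url=LOCALSTACK)
    sqs.send_message(**build_message(queue_url, payload))
    logs = boto3.client("logs", endpoint_url=LOCALSTACK)
    events = logs.filter_log_events(logGroupName=log_group)
    return [e["message"] for e in events.get("events", [])]
```

Because the message arrives via SQS rather than a direct `invoke`, this exercises the queue's event source mapping and its permissions, which is exactly the difference Bob calls out.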
World of Serverless
Ryan Jones Q: How did you learn about and get involved in serverless?
Bob Wall: About five years ago, I came on and co-founded a laundry service, Washio. We built an Uber for laundry, and we raised a bunch of money. We launched in many cities, and then the tech team got acqui-hired by Alliance Laundry, which builds Speed Queen.
So, everything was moving to microservices at that point, and I had it all plotted out. We started building it, and what happened was that we were doing separate repos for each microservice. It quickly became like managing eight products: if you had to change one of them, you had to make that change in every other repo. We met with AWS and talked about API Gateway, Lambda, and things like that. For the first two months, I just hung out at the office all weekend and replicated everything with mock data: API Gateway talking to a Lambda. From there, the product went further and became functions as a service. We iterated on that a few times, and we realized we have an API Gateway that can talk to a user microservice, the Lambda does its work, and now we have no servers.
I don't build stuff with Postgres databases now. I do everything with Dynamo, with a Dynamo stream coming off of it, and that stream saves the data to an S3 data lake. You stand up Athena against that data lake and query it in Athena. We had to write a Serverless Framework plugin to connect a DynamoDB stream to an existing Dynamo table, because while the Serverless Framework lets you create a Dynamo table, it has to be in charge of it, so it destroys and then creates it. And if you make it in Ansible, you can't deploy it to LocalStack. If you make it in Terraform, which is aggressive in that it could delete a table, all of a sudden you lose the table, whereas in Ansible that concept doesn't exist; it doesn't destroy things because it doesn't control the state. The bread-and-butter pattern that AWS talks about, Dynamo to a DynamoDB stream to doing some work with your data, had zero ways in IaC to set up until we wrote the plugin and open-sourced it. It's just on npm. At this point, I'll almost do higher-level consulting for companies to get them situated with this stuff; the playbook is a lot better now than it was two years ago. We couldn't have built Commandeer without that, because in building it, certain things had to be added in.
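In boto3 terms, the pattern the plugin automates is roughly the following. This is a sketch of the pattern, not the plugin's actual code; the table and function names are placeholders.

```python
def stream_spec(view_type="NEW_AND_OLD_IMAGES"):
    """StreamSpecification for update_table; this view type captures both
    the old and new versions of each changed item."""
    return {"StreamEnabled": True, "StreamViewType": view_type}

def attach_stream(table_name, function_name):
    """Enable a stream on an EXISTING table (no destroy-and-recreate),
    then wire a Lambda consumer to it."""
    import boto3  # imported here so stream_spec stays testable offline
    ddb = boto3.client("dynamodb")
    resp = ddb.update_table(TableName=table_name,
                            StreamSpecification=stream_spec())
    arn = resp["TableDescription"]["LatestStreamArn"]
    boto3.client("lambda").create_event_source_mapping(
        EventSourceArn=arn, FunctionName=function_name,
        StartingPosition="LATEST", BatchSize=100)
```

The key point is `update_table` rather than create: the table and its data survive, which is exactly what the destroy-then-create IaC flows couldn't offer.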
World of DynamoDB
Ryan Jones Q: What was the reason behind you using DynamoDB over Postgres? What are the benefits of potentially using that type of architecture?
Bob Wall: The benefit of DynamoDB is that it's NoSQL, so your front-end team can make changes without, in theory, messing up the back-end team. If you want to add a new column, it just starts appearing there, because the records are just standard JSON blobs, which is kind of cool. And it scales infinitely, so your data can keep being saved; the only thing you pay for is I/O. The benefit is that you're in a NoSQL world: you can move fast, you're not writing migration files, and things like that.
Now, the downside. At Washio, we built on Parse, a fantastic tool, but it was a NoSQL managed service. Once you raise a few million dollars, everybody at the board meeting wants summaries and charts and things like that, and now you need to be in a SQL world. The way to get there is to either migrate the data or do MapReduce. The benefit of the DynamoDB stream is that you're laying out your data into the big concept of an S3 data lake. Twenty years ago, data warehouses were terrific: you would get all your data and stuff it in there, and a lot of people would use it. But it's costly.
So, the concept here is that you're just storing flat JSON files or Parquet files that never get changed again. Athena sits on top of it and is 100% serverless: you only pay when you access that data, and you're paying S3 prices for storage. You do need to organize it correctly into partitions, which are like pseudo folder structures inside of S3 where all the files are stored. Now you can query things in Athena, and you never have to worry about whether your RDS instance is up or about the storage running out. So the data store is serverless too. Well, you'd call Dynamo managed, I guess, but it's serverless in the sense that it's infinite; it can go on forever in terms of how much data you can put in there.
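Those pseudo folder structures are typically Hive-style partition paths. A small sketch of the layout, with the table and field names assumed for illustration:

```python
from datetime import datetime

def partitioned_key(table, record_id, ts):
    """Build an S3 key like orders/year=2020/month=09/day=14/abc.json.
    With the table partitioned on year/month/day, Athena can prune
    partitions: a query filtered on those columns only scans (and
    bills for) the matching objects."""
    return (f"{table}/year={ts.year:04d}/month={ts.month:02d}/"
            f"day={ts.day:02d}/{record_id}.json")
```

A stream consumer writing its records under keys like these is what makes the "you only pay for what you scan" economics of Athena work in practice.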
Ryan Jones Q: What do you think about Alex DeBrie writing a book about DynamoDB and single-table design? What is your perspective: single table, or do you create multiple tables?
Bob Wall: I create multiple tables. You'd have a user table and an account table; the account table might have a type column with a string. The reason is that before Dynamo, I did a lot of Mongo and then Firebase. Using Firebase with Washio, the drivers had an app, the customer had an app, and we had a command center website where you could chat with each other. All this crazy stuff was happening with pub-sub, and you have to architect your Mongo correctly, or you're screwed.
You have a users collection, and then in the chat collection, you have a pointer with the user ID. You don't put the whole user object in there, because if you do NoSQL incorrectly, it can be the worst thing. Same with Dynamo: if you make that single-table layout, you're going to have many issues in map-reducing, and you also can't visualize it. If you go into Commandeer and look at Dynamo, we do foreign key inference. I always post in dev channels on Reddit and places like that, and this was an interesting one, because we're using foreign keys now in DynamoDB, and people said that you couldn't have foreign keys.
And my premise was that if you're looking at an account record and there's a user ID column in there, the computer system has a pretty good idea that it means the user's ID. So, in Commandeer, you can see your user record while you're looking at that account record. Yes, the beauty of Dynamo is that it's NoSQL, so you can mess with your model, but later on, when you're trying to look at data, you shouldn't have to go to another tab, open the user table, and search by ID for that user. We also have an ER diagram tool that shows you the ER diagram of your Dynamo system, which I find helpful. I've embraced the fact that NoSQL exists, and I'm doing kind of a hybrid with it, because if you just have one table, you're messing it up for your dev team, and it's going to be a painful way to manage changes.
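The inference idea can be sketched as a naming heuristic like the one below. This is a simplified illustration of the concept; Commandeer's actual inference rules aren't public.

```python
def infer_foreign_keys(item, table_names):
    """Guess foreign keys from column names: a column like 'userId'
    probably points at a 'user' table. Heuristic sketch only."""
    fks = {}
    for col in item:
        # Skip the bare 'id' primary key; look for '<something>Id' columns.
        if col.lower().endswith("id") and len(col) > 2:
            target = col[:-2].rstrip("_").lower()
            if target in table_names:
                fks[col] = target
    return fks
```

Given an account record `{"userId": "u1", "type": "pro"}` and a known `user` table, the heuristic links `userId` to `user`, which is what lets a UI show the related user record inline.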
Ryan Jones: Yeah, this is a fascinating topic. When you start on something and think about those things, not every solution is the right fit. The idea of single-table design has benefits even though it's complex. But think about making changes, or handing that off to somebody else on the team: there could have been 15 tables, or ten, or five, but instead we've got one table, and it's just this massive flood of data. The tooling is not there, so there's tons and tons of data, but you can't use the UI at that point.
Bob Wall: Yeah, you can't use the UI. When you're building it initially, the whole product fits in your brain, but when you have 100 developers, that's a significant problem. The other thing we do in Commandeer is that when you have 20 Dynamo tables, you can see the columns of each one. I'm just selecting the top record of each table and then showing it to you, so it's not fully accurate: if you had a million records in that table and later on started adding more columns, it won't give you an accurate representation of your data. But if you have a user table, and it always has ID, first name, last name, email, phone number, then the Commandeer tree view shows you each of those columns and tells you whether it's a primary index, a global secondary index, or just a column, and what the data type is. So, if you make a table and stick to the model, you know those 20 columns. Otherwise, you only know that through code or an API request in Postman.
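Sampling the top record to show a table's columns might be sketched like this; the approach mirrors what Bob describes, but the code is our own illustration, not Commandeer's.

```python
def columns_from_item(item):
    """Pure helper: raw DynamoDB item -> {column: type code}.
    Attribute values look like {'S': 'Bob'}; the single key is the type."""
    return {col: next(iter(val)) for col, val in item.items()}

def peek_columns(table_name, endpoint_url=None):
    """Sample the top record of a table to list its columns and types.
    A snapshot, not a schema: rows written later may carry columns
    this sample never sees, which is the caveat noted above."""
    import boto3  # imported here so columns_from_item stays testable offline
    ddb = boto3.client("dynamodb", endpoint_url=endpoint_url)
    items = ddb.scan(TableName=table_name, Limit=1).get("Items", [])
    return columns_from_item(items[0]) if items else {}
```

Passing `endpoint_url="http://localhost:4566"` would point the same sample at LocalStack instead of AWS.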
Future of Serverless
Ryan Jones: How do you view the current serverless landscape, and what do the next five years look like?
Bob Wall: I think container systems are the future of system development. Big companies will keep their on-prem systems and data warehouses and such, but newer companies are going directly into the cloud. The container world has one great use case: you have a huge monolith app, you throw it into a container, expose your API from it, and now it exists in the cloud. If you go to a restaurant, the computer systems look like they're 50 years old; there's not going to be a mass replacement of every system, but you can run them in a container. Serverless is this whole other paradigm, where your costs depend entirely on usage and not on the system. You can add serverless onto existing systems and start making products faster, doing new projects that don't exist inside the monolith world of your existing system. I think they're both going to grow considerably, and I believe they go hand in hand. For example, we built a Docker UI into Commandeer. We didn't plan on doing it, but since we have runners for the Serverless Framework, Ansible, and Docker Compose, we had to use Docker under the hood to stand up those services on Mac, Windows, and Linux, and then we had to build a UI to see our Docker system. We're trying not to go too deep into the container world, but it's hard not to, with ECS and EKS, and there's GKE now, the Google Kubernetes Engine, where they manage it for you. Kubernetes is amazing, but it's a complex world, and I think what they're doing now is getting commoditized. It will be the same as serverless: you want a Lambda, you don't care how it exists; it just exists. So, I think the serverless world is just going to grow, I don't know, 100x in the next five years, or 1000x.
Ryan Jones: Do you have anything to promote?
Bob Wall: I'll shout out Commandeer. The tool is definitely very stable and useful. Beyond the serverless world, it takes time for people to realize the use cases for themselves. I built Commandeer so that I never have to look at S3 and Dynamo outside of this tool on any project. Usually when you're working, you feel like, oh, I need email, I'm going to Gmail; I think those use cases keep growing. That's what I'm plugging.
I am also into demos. Some demos are, hey, how does this work? But a lot of times I end up debugging their serverless.yml file and getting it to deploy to LocalStack. I'm not strictly focused on the consulting side of it; I'm more about providing a service or a tool that helps. I don't mind a free tutorial or helping people get set up, because I think that's the biggest thing people get hung up on now with serverless.
Ryan Jones: Definitely. To those listening, this has been the Talking Serverless Podcast with Ryan Jones. If you like our show and want to learn more, check out talkingserverless.io. Please leave us a review on iTunes, Spotify, or Google Podcasts, and join us next time as we sit down with another fantastic serverless guest!