Ep 27: Gojko Adzic, Partner at Neuri Consulting
Joshua Proto: Hello, listeners of Talking Serverless. It's me again, Josh Proto, and today I am joined by Gojko Adzic. Gojko is a partner at Neuri Consulting and author of Running Serverless. Thank you so much for taking the time today to join us on Talking Serverless.
Joshua Proto Q: Tell us about your journey as a developer interested in Agile development. How did you get involved in the serverless space?
Gojko Adzic: As a developer, I've always been interested in tinkering with lots of different things. When I started making money from programming, I had to know my way around system admin as well. You would expect, when delivering software to somebody, to also clean their email when it gets stuck, set up servers and wire everything up. I used to do a lot of Linux admin as well. Over time, I realised that I couldn't really be good at both. I wanted to focus on one thing, and I made the decision to focus on software development, not on admin. It's a funny thing, because 22 years later, we're getting to the point where it's almost zero operations. I still enjoy doing a bit of sysadmin every now and then, but one of the biggest benefits of using serverless technologies is not having to worry about the administrative or the operations side of things. At the moment I'm working on two products. One is a collaborative mind mapping tool that's used by millions of school children worldwide, and we've been writing it since 2013. That's a relatively stable product; it's been around for a while, it doesn't change that much, and it's all running in AWS Lambda. We fully migrated to AWS Lambda in 2017, so we were one of the early production adopters of Lambda. I don't know many other people that were 100% in production on Lambda back then. The other project is an interesting thing that's still in beta. I expect it will get fully launched in a month or two. It allows developers and techies to edit videos much faster than the traditional editing tools. It will let you convert markdown into narrated video, using all sorts of wizardry to generate life-like voices from the text people put in. This actually came as a result of some demo videos that two colleagues and I were doing for a serverless toolkit that we released in 2016. We couldn't keep the videos up to date easily, so I decided to write my own program that keeps videos up to date.
It basically does source control for videos.
Gojko Adzic: I don't know how far back you want to go, but my journey as a developer started when my father bought a Commodore 64 for himself. I was six years old, and my mother would have killed him if he had told her that he bought the computer for himself, so he had to pretend he bought it for me. But because he bought it for himself to tinker with, I was the only kid in the neighbourhood that didn't have a cassette player or anything; all I had was this wonderful thing that was supposed to be mine and a really, really thick German manual. I didn't speak a word of German, and my father could PEEK and POKE commands that would make the machine do something, or show some colours on the screen. So I started being interested in programming before I started going to school or writing properly. I started doing some PEEK-and-POKE copy-paste software development, which is what most people do on Stack Overflow these days anyway; I was doing Stack Overflow development, only without Stack Overflow. I started professionally making money from development about 21 or 22 years ago. I got involved with the community of people who were really interested in Agile development. It wasn't necessarily the first cycle of it, which was the late 90s and early 2000s, but I was probably in the second cycle of that.
Gojko Adzic: I was really fascinated by being able to develop stuff a lot more productively than with a very heavy process. One of my interests around that was test automation, because I worked with a company that had a few hundred database developers, and they were all testing stuff manually because apparently you couldn't automate any of that, which was ridiculous. So I started writing a way for people to automate database tests more easily, and from there things just took off. It turned out that lots of other companies had similar problems. Lots of banks had problems with automated testing, or databases, and banks have too much money, so it would have been a shame not to work for them. So then I started working as a consultant on a mix of development, Agile testing and a bit of process, and so on. There, I got very interested in product management because I started building my own products. For a while I was more interested in product management and I learnt more about that, then for a couple of months I was more interested in testing. These things feed off each other: the better I am at testing software, the more high-quality software I can produce, the better products I can build, and then I can use product management to figure out how to build a better product. After that my bottleneck moves to my development skills. I learn how to be a better developer, and then the bottleneck moves to my testing skills. It goes around and around and around. I've been very lucky to basically work for myself for the last 13 or 14 years. I get to choose what I do. I work as part of a four-person consulting company where pretty much everybody chooses what they'd like to do. We keep it very informal, but because of my partners and because of the work, we are a reasonably well-known consultancy within the Agile space. It has taken me to conferences worldwide, and I've met a bunch of people.
Gojko Adzic: I don't really know how I came across serverless or Lambda. I probably heard about it at a conference, and it just fit in at the right moment for us. In 2016, with this mind mapping tool we built, we were hitting the limitations of our architecture at the time, and we wanted to experiment a bit with other things. The original prompt for us being interested in Lambda was that I had what I thought was a brilliant feature idea for connecting mind maps with the Wikimedia knowledge graph. My idea was that if people wanted to do research on a topic while writing about it in a mind map, why not just bring it all into the same interface? I thought it was kind of a wonderful idea. I proposed it to my business partner, as there were only two of us working on MindMup, and he said it was a really stupid idea. I wanted to prove him wrong, so rather than argue about whether it was a good idea or not I thought, well, I'll just build it, release it into production and measure the effects of that. And I'll prove him wrong. But because we were already running at the limits of what our system could do in the current architecture, he was insisting that my work couldn't pollute any of the current code, so I decided to build the integration on Lambda because it was auto-scaling and I could tinker with it. I built the integration in Lambda, which took me a couple of days, and I launched it hoping to prove that I was right. I ended up proving that he was actually right; nobody used the feature, but the experience gave us an interesting option for solving architectural problems.
Migration to Serverless
Gojko Adzic: So somewhere in 2016, we started gradually migrating things to Lambda, and by February 2017, that was done. It was auto-scaling, it was self-managed. I think that aspect of serverless is the one that appeals to me the most, this whole auto-scaling, self-managed thing. With MindMup there are only two of us, so we do everything from product demos, pre-sales, selling and developing, to supporting and managing everything. So if I need to spend time dealing with operational issues, or if I need to spend time dealing with system administration, then I'm not spending time on something that's a lot more important. Like I said, a long time ago I even knew how to do system admin, but I don't think that's where my value is in this thing. Being able to focus on value, to spend time building real value and delivering real value through what we do, is really important because we get to invest our time better. And that's not even the only thing I do; I do a bunch of other things as well. So being able to just keep things running without worrying about them is really important.
Gojko Adzic: I think for me, the penny really dropped on how valuable this thing is with Heartbleed, and the second one was sledge-something or hammer-something. When that was discovered, everybody wrote about it and it was in the news. Suddenly people became aware of vulnerabilities in CPUs, especially when running them in a shared environment. When the second one was discovered, it was during August I think. I remember waking up, drinking coffee and looking at what came in overnight. The tool had customers all over the world, so we had somebody from your time zone, where you are much later than we are, and during the night they discovered this vulnerability. A concerned IT admin for one of our clients sent an email saying, we need to know how you're solving this, we need to know your mitigation plan, we need to make sure that our data is protected and our processor is protected from this. And I'm drinking my coffee in the morning, trying to wake up and figure out what it is, so I copy and paste the CVE number for this thing he told me about. The first result on Google was that AWS Lambda was already patched. So for me, that was really when the penny dropped. I realised this is massively more valuable than just computing on demand and paying for every 100 milliseconds, or whatever. Somebody took what I usually had to deal with, working with containers or servers, or working with virtualized environments, and dealt with it.
Joshua Proto: "... the less time you have to spend troubleshooting the operations, overall, the better your product will be."
Gojko Adzic: I think we have a very specific constraint in that there are only two of us working on this product, MindMup. Yet we are able to compete with companies that are several orders of magnitude larger and much better funded, because we can focus on the things that really matter. I think lots of large organisations waste energy on stupid stuff that people shouldn't even be doing in the first place. It's amazing to look at how people keep reinventing the wheel over and over again. When you have a large company, it's easy to hide stuff that doesn't make sense. But if you are working with a very small team, it's impossible to hide. If I need to spend half the day cleaning logs or figuring out why my system's recycling every hour because there's a high user load, I'm not spending my time actually delivering value; I'm just cleaning up after myself. Amazon, Google and Microsoft, for their part, have people who do system admin for a living, and they do that much better than people who are paid to do it in large organisations, because that's their bread and butter. If you have a bank, your bread and butter is moving money. If you look at the cloud providers, their key value is being amazingly good at managing and monitoring systems. The fact that we can pay them to do that for us is wonderful. I think lots of people try to compare the cost of running in a serverless way to owning your own server. They just look at it as: they could buy a server for a thousand bucks, or they could get an instance on Digital Ocean for five bucks a month. Yes you can, but how much is your time worth the next time you need to patch the database, or figure out how to cluster these processes, as opposed to being able to develop something else? Even for employees, developer time is the most expensive thing. With software, it's not the hardware, and it's not the cost of operation; very few products fail because the running cost is too high.
Lots of products fail because what they've developed is wrong.
Joshua Proto: "A hard thing that I have run into when talking to people is that if it's not cheap, they don't really want to do it. The people who we've seen use it best are actually going to be spending more money now that they have a serverless system, but their productivity is 100 times greater, because it's money that is valuable to the entire business."
Gojko Adzic: The cost is an interesting factor there, because with serverless you pay for your application being busy; if you can engineer the application in a way so that it's not busy, it's a massive win. Serverless is actually the first time I have seen a billing model reward good architecture. What I mean by that is, people teach that good design and good architecture mean low coupling and high cohesion: relatively independent things that you can fit in your head and that don't have lots of side effects. But where people would host things, from forever ago up until a few years ago, the deployment architecture and billing model start to prevent you from actually doing that at runtime. We have modules that are isolated at design time and development time, but when you run them, people tend to patch things up and put them together on machines. When I was still working with large companies as an employee, they would get immortal storage that was never supposed to die, and it would cost more than a house. They would put two instances on it that had top-of-the-range processors back then and everything redundant in them, so they were never supposed to die. The problem is, once you have a setup like that, everything has to go there, and things can interact with other things. You get a stupid problem in one of your modules that ends up taking out all the processes there by eating up disk space or causing other runtime issues.
Issues and Benefits
Gojko Adzic: 15 years ago we were troubleshooting memory leaks in strings in the ASP.NET 2.0 hosting environment on IIS, because with such a heavy load the memory management of one web page starts to mess up everything else that's running. I think the problem there was, if you have reserved capacity in those three immortal machines, or 20 virtual machines, or Digital Ocean, everything goes there because you want to keep them busy. On the old architecture, we had different format converters; people see mind maps and then they want to convert them into PDF or an image. PDF was the busiest one, because people would need PDF to print for different things, and PDF is a memory hog since we use Ghostscript. We also have something like the SVG export template, which requires almost no memory because it's just a text file, and the markdown exporter. It would be silly to get separate virtual machines for the PDF, a couple for the SVG, and a couple for the PNG, because we'd have to run it on 200 different virtual machines, when really five or six running together and sharing capacity worked perfectly well for what we needed to do. So of course, we did that to save money on hosting costs. But then I had a really stupid bug where the SVG exporter did not clean up the temp space correctly, and it filled up the temp space on one of the exporter VMs. The other exporter VM started being busier, and it filled up the temp space on that one as well. Linux starts misbehaving really badly when you fill up the temp space, so it was a domino effect, and a really stupid bug took out all the exporters we had.
Gojko Adzic: There really are no benefits from bundling all of these things together. In fact, there's a benefit from unbundling them, because if you have one module that's a memory hog, like PDF, and one that doesn't need a lot of memory, like SVG, keeping them separate makes the SVG processing cheaper. Putting them together means you have to pay for the higher memory capacity even when you don't need it. Whenever people ask me about the cost of serverless, I try to tell them that you pay for different things, and you can absolutely make it cheaper than container-hosted reserved capacity. In a lot of environments people have to over-reserve capacity, because the time to recover is slow, and you have to think about warming up caches and bringing up all these things. Usually they have more capacity than they need, just because the peak might increase and they might need to handle that load. So very often people pay for reserved capacity that they never ever use. With pay per execution, it's different. This morning, somebody published a video on YouTube demonstrating how to use one of the tools I built in schools. For whatever insane reason it became popular in Russia, and I had a month's worth of activity on the site in six hours. But because it's all auto-scaling, it managed the increase on demand, so I didn't even have to do anything about it. It was just funny watching the graph explode, and I know I'll have to pay more for that peak. But that's perfectly fine, because I got good business out of it. Having reserved capacity just in case somebody publishes a YouTube video is a wonderful way to waste money. Of course there are better ways to auto-scale today, but I think comparing it to having a dedicated machine is wrong, because as soon as you have a dedicated instance, you're going to put lots and lots of stuff onto it, so it becomes difficult to know what actually costs what.
What becomes possible is actually knowing how much money a particular function of your system makes or loses, and whether it participates in a critical flow or not. My experience is that we were able to make much better optimization decisions.
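To make the unbundling argument concrete, here is a toy cost model. The prices, memory sizes and run counts below are invented for illustration, not MindMup's real figures; the point is only that in a pay-per-use model, a lightweight function billed at a memory-hungry function's size costs many times more than it should.

```python
# Illustrative cost model: why separating a memory-hungry exporter from a
# lightweight one can cut pay-per-use costs. All numbers are assumptions.

PRICE_PER_GB_SECOND = 0.0000166667  # rough Lambda-style on-demand rate

def monthly_cost(memory_gb: float, seconds_per_run: float, runs: int) -> float:
    """Cost of one function billed by memory * duration."""
    return memory_gb * seconds_per_run * runs * PRICE_PER_GB_SECOND

# Hypothetical workloads for two exporters.
pdf = monthly_cost(memory_gb=2.0, seconds_per_run=3.0, runs=100_000)    # memory hog
svg = monthly_cost(memory_gb=0.125, seconds_per_run=0.2, runs=500_000)  # tiny

# If both ran bundled in one deployment sized for the PDF exporter,
# every SVG run would also be billed at 2 GB.
svg_bundled = monthly_cost(memory_gb=2.0, seconds_per_run=0.2, runs=500_000)

print(f"separate: {pdf + svg:.2f} USD, bundled: {pdf + svg_bundled:.2f} USD")
```

With these made-up numbers, separating the functions drops the SVG share of the bill to a few percent of the bundled figure, which is the billing-model-rewards-good-architecture effect described above.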
Gojko Adzic: With MindMup we have a nice comparison point, because it was running on a hosted container environment before the migration. Between 2016 and 2017, the year we moved to Lambda, we added about 50% more active users, yet our hosting costs dropped by about 50%. Taking all of that into consideration, my guess is that by migrating to Lambda and re-engineering the system to work in that way, we actually saved about two thirds of our operating costs. There was research, and I can dig up the link for that so you can share it later with your listeners, where IDC ran an analysis of early serverless adopters. Their conclusion was that on average people save between around 50% and 70% in operating costs and become almost twice as effective in terms of delivering new features to the market.
Joshua Proto: "... in today's global competitive world, being able to launch something and get it in front of people to get that feedback, and then iterate off of it, makes serverless just that much more beneficial."
Joshua Proto Q: Is serverless a toolset that works well with an agile methodology, or is it that everything about serverless is more efficient in reducing operational expenses and time to market? Does it pair easily with legacy, management and productivity systems?
Gojko Adzic: I've never really thought about it that way. I think being able to develop faster and being able to deploy faster are two things that can help separately. I think serverless is a liberating structure. It's something that removes the shackles of having to think about massive 'Big Bang' deployment and design, and how it interacts with everything, which is definitely helpful. You can deal with smaller problems and concerns. I still see a lot of people bring up their entire stack as a single massive template just because they can, and although that's wonderful as an exercise, I think if you have to deploy all your other functions when you deploy a single function, you've lost the benefit of the serverless idea. That comes back, not necessarily to Agile, but to designing systems to work in an asynchronous way, so they work well in a highly distributed way.
Gojko Adzic: I think one of the key problems that people find when they start migrating to Lambda is that all of a sudden, they realise they're designing a highly distributed transaction processing system. That's a totally different way of thinking from working with a monolithic architecture or working with in-process communication. One of the things that drew me to Lambda is that I actually enjoyed making these systems for large banks, when I was still programming for other people. I've learned the hard way how to structure things, and I've worked with some brilliant people that were really good at designing these high-throughput transaction processing systems, so I've learned a tonne about hand-designing messaging protocols and things like that. Once you start thinking in that way, Lambda becomes really easy.
Gojko Adzic: I remember talking to somebody from the Erlang community at a conference in Berlin two years ago, and he was talking about how, with Erlang, you have to design the protocol well, and then you can afford to make mistakes later. If you've not designed the protocol well, then nothing's going to save you. Erlang is a highly distributed system, so the protocol question is: how do these components communicate with each other? And I think it's the same problem in Lambda: how and when do all these teeny-tiny bits and pieces that run in parallel talk to each other? How do they collaborate in a way that keeps them fairly independent? If you can crack that, then this becomes incredibly powerful, because then you can bring one function up, bring it down, and deploy lots of different things independently. If you've designed all these things so they all have to go live at the same time, and changing one bit in one of your Lambda functions requires you to change the same bit in 50 other functions, you're going to be in a world of pain.
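One way to picture the protocol discipline described here is a versioned message envelope with a tolerant consumer. This is a minimal sketch, not anything from the episode; the function names and message shape are invented. The idea is that each message between functions declares its version, and a consumer reads only the fields it needs, so a newer producer can go live without redeploying 50 other functions.

```python
# Sketch: versioned message envelopes between independently deployed functions.
import json

def make_message(kind: str, payload: dict, version: int = 1) -> str:
    """Producer side: wrap the payload in a small, versioned envelope."""
    return json.dumps({"kind": kind, "version": version, "payload": payload})

def handle_message(raw: str) -> dict:
    """Consumer side: accept versions it knows, ignore unknown extra fields."""
    msg = json.loads(raw)
    if msg.get("version", 1) > 2:
        raise ValueError(f"unsupported protocol version: {msg['version']}")
    # Read only what this consumer needs; extra keys are fine, so producers
    # and consumers can be deployed and rolled back independently.
    return {"kind": msg["kind"], "payload": msg["payload"]}

raw = make_message("export-requested", {"format": "pdf", "map_id": "abc123"})
print(handle_message(raw)["kind"])
```

If the envelope is stable, each function is a box you can freely rewrite; only the envelope has to be designed carefully up front.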
Joshua Proto Q: How do you communicate to different levels of understanding? Is there a way to teach someone right the first time? What's your approach for teaching someone about this?
Gojko Adzic: My approach to teaching people is giving them lots of good examples of how things work. I think there's a lot of stuff people need to learn about the actual infrastructure, how it works, and how not to be beaten by limitations that might not be obvious. If people already know how to design distributed systems, I can just teach them the infrastructure bits and pieces. What I tried to do with the book is gradually give people nuggets of how to design good distributed systems, and how to design aggregates of data so that you don't have to worry about synchronising things in multiple places; also how to figure out what you need to pass in where, and how to make sure that things work well. I think one of the biggest mind shifts is that everything is eventually consistent. It's not transactionally consistent, and it's very difficult to know what's happening when these things are 'magically' running all over the place. I don't believe in just one approach, but if there is one that's good to start with, it is maybe teaching simply how to manipulate the architecture and the infrastructure, and giving people challenges that will help them understand why it's important to design good protocols for their application components.
Gojko Adzic: Different people have different things they need to learn. I've worked with people who are front-end web developers and have never really built a server component. We taught them how to do serverless at conferences in a couple of hours, but what they would learn is basically how to set up a simple API to connect things to the backend. That way, they wouldn't have to bother anybody for something simple. More serious candidates for architectural teaching would be people who've built relatively serious systems in larger organisations, where they were used to thinking about architecting for reserved capacity. Helping them figure out what happens when you no longer have to think about reserved capacity, and how that scales, is really important.
Joshua Proto Q: Do you think that the majority of people can still benefit from hammering down those principles and learning about the serverless environment more? What are you looking forward to continuing to teach about serverless?
Gojko Adzic: In general, I think these are good things to learn. Decomposition into smaller problems is something that software developers have to do every day, on many different levels. If you look at the history of software development, it's basically getting better and better at decomposing things, and getting tools to work on smaller problems at a time. Lots of people find themselves having to design these distributed, basically transaction processing systems, and for people that have done that already, serverless becomes a no-brainer. Lots of different things about designing systems to work that way are just generally good. I was learning about domain-driven design in 2004/2005, looking at anti-corruption layers and designing systems so that you encapsulate knowledge in layers. One of the ideas that co-existed around that time was what Alistair Cockburn called hexagonal architecture. It was basically a way to structure the design so that you can test all the components well, and it applied really well to the highly distributed systems I was working on at the time. When it comes to serverless, one of the most common problems I hear from people is that they don't know how to do automated testing on it, when they have no idea where it's running and it's so baked into the architecture. It becomes really difficult to do any kind of smaller tests when you have to do integrated, end-to-end tests for everything. If you know how to design the system well, then you can apply hexagonal architecture, the ports and adapters that have been around for 20 years. So learning about those kinds of solutions is good, even if you don't do Lambda, because they're generally good solutions to recurring problems we have in software.
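The ports-and-adapters idea mentioned above can be sketched briefly for a Lambda-style function. This is an invented illustration (the names `FileStore`, `export_to_svg` and `InMemoryStore` are not from the episode): the core logic depends only on a small port interface, so it can be unit tested with an in-memory adapter instead of an end-to-end deployment against real infrastructure.

```python
# Sketch: hexagonal architecture (ports and adapters) for a serverless handler.
from typing import Protocol

class FileStore(Protocol):
    """The port: the only capability the core logic needs from the outside."""
    def put(self, key: str, body: str) -> None: ...

def export_to_svg(title: str, store: FileStore) -> str:
    """Core logic: pure, easily testable, knows nothing about AWS."""
    key = f"exports/{title}.svg"
    store.put(key, f"<svg><text>{title}</text></svg>")
    return key

class InMemoryStore:
    """Test adapter: stands in for S3 (or any blob store) in unit tests."""
    def __init__(self) -> None:
        self.objects: dict[str, str] = {}
    def put(self, key: str, body: str) -> None:
        self.objects[key] = body

# In production, a thin Lambda handler would wire in a real storage adapter
# and call export_to_svg(event["title"], real_store). In a unit test:
store = InMemoryStore()
key = export_to_svg("demo", store)
print(key)
```

The handler itself stays a few lines of wiring, and everything worth testing runs locally with no deployed infrastructure.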
Features of Lambda
Gojko Adzic: I think this Lambda deployment model really rewards good design. Good design is good to know regardless of whether you're going to apply it to Lambda or not; lots of people suffer from designing their app in a really nice way, and then having to deploy it on something that doesn't necessarily map to that. The whole deployment architecture with serverless is loosely coupled, and if you can build highly cohesive stuff and not put the whole kitchen sink into your Lambda function, you can benefit greatly from it. There are very few things that are specific to Lambda that aren't generally applicable to designing good complex systems, and learning how to do good design is a generally useful skill.
Gojko Adzic: There is one more serverless-specific thing to it, and it goes back to that idea from Erlang. If you design the protocol between the components well, the architecture itself is very forgiving of mistakes. If you've designed it to be decoupled, you can easily decide to rewrite a component from scratch if you make a mess. You can make lots and lots of mistakes within the box; as long as the boxes are communicating solidly with each other, it's very forgiving of mistakes and trying things out. For me, multi-versioning is an absolutely underrated feature of Lambda. Lots of deployment toolkits are not positioned to use multi-versioning well; they tend to deploy whole sets of functions. Something that's still missing in those toolkits is being able to do a really stupid experiment with this stuff without exposing myself to too much risk. Say I'm just going to create a teeny-tiny new version of this function that is going to run for my low-value customers, or my customers in a certain location. The fact that you can have multiple versions of a function running concurrently and then compare them is amazing, and the old one is still there whenever you need it. That's wonderful. That's one of these aspects of not having to think about reserved capacity anymore.
Gojko Adzic: Say you have a big enterprise customer who wants something, and they want it now, but it's completely at odds with what everybody else wants. You have two options: you can develop the thing for these people and break it for everybody else, or you can develop it for these people and then spend another six months consolidating it and making it work for everybody else before you deploy to production. What multi-versioning offers is a third option: developing the feature for these people and giving it to them, but keeping everybody else on the old version. So they get time to market, you start charging them more, and everybody else is unaffected by it. Then over time, you can bring the functionality up to work correctly for everybody, or consolidate it, improve performance, compliance or whatever it needs to be globally useful. Or, if they're such a big customer and their stuff is completely at odds with everybody else's, you can just keep their version running forever. It's making money, and that's it.
Gojko Adzic: We used to take PayPal subscriptions a while back, before PayPal became hostile to us. So we decided not to take any more PayPal subscriptions, but there are still people who pay us for legacy subscriptions through PayPal. We were changing how the payments infrastructure works, changing the internals of it, but when we started testing the new one, we realised that we had completely broken the old PayPal flows. With this whole PayPal legacy thing, people are still paying us, and we don't want to lose that money. We would normally have had two options: take this nice new clean architecture and make a mess of it again because of the legacy stuff, or bite the bullet and stop taking money from the old subscriptions. But because we had this running in Lambda functions, we just made the decision to route the PayPal messages to the old version. We weren't adding new functionality or any new subscriptions using PayPal, so it could just run on the old stuff until it died out. Three years later, people were still giving us money through PayPal because of that function, and we have a different version of our payment architecture for everybody else. This idea of being able to have multiple versions of the same thing running at the same time is incredibly powerful. But in order to do that, the protocol between components must be designed so that any version can receive a message from the rest of the architecture.
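On AWS specifically, the multi-versioning described above is commonly driven through function versions and weighted aliases. The helper below is an invented sketch; the shape of the `RoutingConfig` dictionary is what boto3's `update_alias` call expects, and the commented-out API call would shift live traffic, so it is only shown, not executed.

```python
# Sketch: building a weighted-routing config for a Lambda alias, so a new
# function version can serve a small slice of traffic while the old version
# keeps serving the rest. build_canary_config is an invented helper name.

def build_canary_config(new_version: str, weight: float) -> dict:
    """Send `weight` (0..1) of traffic to new_version; the remainder stays
    on whatever version the alias currently points at."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be between 0 and 1")
    return {"AdditionalVersionWeights": {new_version: weight}}

routing = build_canary_config("42", 0.05)  # 5% of traffic to version 42

# With real AWS credentials, this would apply the shift (not run here):
# import boto3
# boto3.client("lambda").update_alias(
#     FunctionName="pdf-exporter", Name="live",
#     FunctionVersion="41",              # the other 95% stays on version 41
#     RoutingConfig=routing,
# )
print(routing)
```

The same mechanism covers the PayPal story: the alias for the legacy flow can simply keep pointing at the old version forever, while every other consumer moves on.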
Joshua Proto Q: Do you think this will make computers and humans get along a bit better? Are there any key problems that will prevent that from ever happening?
Gojko Adzic: I wouldn't be surprised if, 15 years from now, we have a whole movement of people migrating from this type of architecture to something else. These things are cyclic. Jokingly, we can talk about Lambda going back to mainframe computing and timesharing, because that's what it is. Mainframe computing was stupidly expensive 30 or 40 years ago, and then you had the PC revolution, client-server architectures and three-tier architectures. So now we're going back into another cycle of the whole thing. I think in our industry things move in cycles. So hopefully we get better and better tools, we work at a higher level of abstraction, and we get more time to deal with more interesting problems. I think this whole infrastructure-on-demand aspect is really interesting.
Gojko Adzic: I have a friend whose daughter is at university, and she's doing some project on her own and needed a bit of advice about how to read data from a server. My friend recommended that she talk to me, and I started explaining how I would do that stuff. She didn't really know how to fit this into her app, and asked if she could show me what she'd done. So she showed me this wonderful thing she'd done on her own in a couple of hours, where she used Firebase in Xcode and just drew the screens. She put together about 90% of an app without even thinking about APIs, HTTPS, or what a database is. Of course you can do stupid things with that, but I think it's incredibly powerful. Lambda and serverless are just going to give us another generation of tooling to play with. So hopefully, people will spend less time fighting the infrastructure and more time actually solving interesting business problems, because we are already at the point where you can get 90% of the way there without even learning what HTTPS is. It becomes incredibly powerful, but really sad at the same time, because I enjoy knowing how things work.
Gojko Adzic: If you look at this stuff, you can deploy a bunch of functions into lambda and invoke them in many different ways, and you don't have to worry about clusters. You don't have to worry about failover, monitoring or scaling. It is similar in that this whole new generation of kids are able to just take Xcode, Firebase or Amplify and knock up an app. I think people will be able to knock up much more powerful apps. Whether they're successful or not is a totally different question, but I think it's a liberating thing. A huge amount of the complexity that typically causes problems (failover, messaging, monitoring, observability and things like that) can now be rented from Amazon, Google or Microsoft, and that lets people deal with problems at a higher level. It hopefully lets them deal more with making their customers happy and making sure their customers fight computers less. My hope is that this is another step in having to worry about fewer things when you're designing your app, so you can actually spend more time solving the problems that you actually wanted to solve, in a better way.
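[Editor's note: as a concrete illustration of the point above, this is a minimal sketch of an AWS Lambda handler. The event shape and the `name` field are invented for illustration; the point is that no cluster, failover, or scaling configuration appears anywhere in the code, because the platform owns all of that.]

```python
import json


def handler(event, context):
    # A complete Lambda function: just a function that receives an event.
    # Scaling, failover and monitoring are the platform's problem, not ours.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Locally, the handler is still just a plain function, so it can be invoked and tested without any infrastructure at all.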
Joshua Proto: "... that's the ultimate aspiration, and a goal that I feel serverless is able to accomplish, especially if you give it enough time to create testable, repeatable patterns for your clients, create a really well-functioning architecture, and then share that knowledge."
Joshua Proto Q: I saw recently on your blog that you had published some expanded findings on the topic of specification by example. What were some of your findings?
Gojko Adzic: It's been a passion of mine for a long time; I think it's by far the most powerful way of bringing people to a shared understanding of what they want to get. I've been working in that area for a long time and I've written several books about it. One of those books came out 10 years ago, and this was a retrospective on it; it's the Specification by Example book, and it was actually my third book on the topic, but the first one that was really popular. My colleagues and I did a relatively large survey about what's changed in the 10 years since the book came out, and I think one of the things that has changed, for better or worse, is the multitude of formats in which people capture this information. I say for better or worse because I think people have taken too shallow an approach, treating these methods as just a way of automating tests. That's something that was never intended. And then they complain that the method doesn't work, while not even really trying to do it. From one perspective that's great, because it lowers the bar for lots of people to try the method, but from another perspective many more people tried it without ever trying to use it the right way, and then discarded it.
Gojko Adzic: One of the things that really pleasantly surprised me was that about a third of the people we surveyed got enough value from collaborative analysis and cross-functional specification workshops that they never ended up automating any of these things as tests. That is on the other end of the spectrum from people misusing it for testing, and I found it quite encouraging, because for a long time we've been promoting the message in the community that it's the conversations that are important, not the tests that come out of them. Of course, it's wonderfully useful if you do it the right way, but it can also be harmful if you do it the wrong way. I think more than half of the surveyed participants said that they are doing these workshops in a cross-functional way; they have developers, testers, business analysts and customer representatives together, coming up with the specs. That was a wonderfully encouraging message, because it means that all the work we've put in as a community has paid off. That was one of the goals of my consulting company when we started it, to promote this approach and help people become better at it, so it means we've succeeded in that way. There's a whole new generation of people that we still need to educate, but I think we're moving in a good direction.
Joshua Proto Q: If people, including myself, are interested in following that methodology or learning more about you, how would you suggest we do that?
Gojko Adzic: Well, my latest project is Video Puppet (videopuppet.com), and I think people in your audience are one key target market for it, so I would suggest checking it out. It lets you keep videos under version control with source control, GitHub and CI, and build videos from source code and images, among other things. The idea is that you can very easily create demos for your features and keep them in sync when your app changes, or create more explanations or tutorials. If people are interested in what I'm doing, I suggest checking that out first. The other way to look at the things I do is the Running Serverless book. It's on Amazon and the big online stores, and in better physical stores as well, if people are not scared to go into physical stores anymore! I don't blog as often as I did, because I'm working on several products in parallel now, but I do have a blog that's active every few months. I've been blogging for about 15 years now, so there's a lot of material there on the topics we talked about: architectural design, testing in a distributed world, serverless, methodologies and things like that. That's at gojko.net.
Joshua Proto Q: Is there anything else you would like to add?
Gojko Adzic: I'm really excited about this whole idea of permanent storage for lambda functions. I do want to look into that, and it's something I've just started tinkering with. That's why I was mentioning that things are cyclical and tend to end up repeating themselves. For the last four years, when you had people explaining lambda, you would hear them say that you can't share anything between lambda functions: it's all isolated, it's all transient, and you have to design your stuff in a way that doesn't expect sharing between running lambda functions. Now, with permanent storage and EFS storage, you can actually have multiple lambda functions collaborating on a file system at the same time. I think that opens up a new set of use cases for lambda and quite significantly changes how people are going to teach lambda. You can now start approaching lambda as a more traditional interconnected server as well. Whether that's good or not, I have no idea. So if people have heard about serverless on your podcast before, as I assume they have, and they don't know about EFS in Lambda, I would recommend they check it out, because that's something that's occupying my brain quite a lot these days.
Joshua Proto: Thank you for sharing. I'm interested to see what happens. Thank you Talking Serverless listeners for listening to us today. I am looking forward to the next podcast we share together!