Serverless architecture is a hot topic and promises several advantages because it allows developers to skip the time and overhead of planning infrastructure. But how does it change the way one starts a project, and is it worth the hype? In this talk, Ben details his recent experience with AWS and advocates for designing with serverless architecture from the outset of a project. Designing and prototyping around serverless from the beginning allows for quick and inexpensive testing of ideas by writing the business logic in Lambda functions rather than spending time configuring services and permissions. Additional savings are passed on to the client, as they are charged only per transaction instead of carrying the overhead of running their own server. Serverless promises to be an exciting and cost-effective direction for designing applications.


The following is a transcript of the above video from a tech talk given during lunch at GenUI on March 14, 2019.

Ben: [00:00:40] The tech talk that I'm giving is "Serverless and Serverless," which basically covers serverless architecture implemented using the Serverless Framework.

Ben: [00:01:00] So what is serverless? That's one of the things we're going to look at. And then, what is the Serverless Framework? I apologize, I tend to do a lot of setup on these things just to try to get my point across. But the thing I want to cover first is why I'm presenting on this. This is my first time being in front of you guys since I joined the company a year and a half ago, so why all of a sudden am I here? Well, in our world of technology there is an overwhelming amount of tools and skills that we have to learn. It's pretty ridiculous. I get frustrated and think, whatever, I'm just going to learn one thing and try to do it well. But sometimes we come across these silver bullets, where if you learn this one thing, your capabilities and your value as an engineer go up by 10x. Those are pretty rare, but they come along. And I think the Serverless Framework is one of those tools: if you got good at it, your value, what you can offer to clients, what you can build in your own applications, is going to go up, maybe not 10x, but by a big factor, a significant factor. So that's why I'm presenting this.

"...the serverless framework is one of those tools that, if you got good at, your value is going to go up a significant factor."

Ben: [00:02:29] So, the takeaways that I want to share with you, if you walked out of here: I'm hoping that some of the mental barriers to using serverless architecture that you might have would be removed, or at least start to be removed, so that you might feel more confident about exploring it. And then to start using it as a "first idea architecture." That's a phrase I just coined, but the idea is that when you're architecting a solution, you come at the problem asking, how can I solve this entirely using serverless, as a mental model. We'll go into that. And then also just the idea that there is a huge value upside that we can be offering to clients through this type of architecture. OK. And so, key performance indicators out of this tech talk: if I were to say, six months from now, what would be really interesting to see come out of this, it's that somebody in here would use this framework on a project, whether it's a personal project or a client project. It doesn't really matter, anything that moves the knowledge further. And then hopefully, if you move your knowledge further, you'd come back in here and present an extension of this tech talk, so we could keep learning together collectively. And then the stretch goal would be, it would be totally awesome if we could actually know enough to walk into a client pitch and say, "Hey, we're going to give you all these things with this one technology and nobody else can do that," and they sign a contract. That would be a really cool stretch goal.

[00:04:25] So, disclaimers: I talk as though JavaScript is the only choice for this implementation. It is not; it just happens to be the thing I'm familiar with, so don't get hung up on that. Python and Ruby also work on the Serverless Framework. I also focus pretty much on AWS, and there are plenty of other, maybe better, options that the Serverless Framework works with. We'll go over that briefly. And really, the topic overall is that this is really powerful, and I think I do a pretty bad job of conveying quite how powerful.

[00:05:04] First of all, serverless is architecture where, basically, the cloud provider runs a server for you; you don't have to do that. They allocate resources and handle scaling and memory and all that stuff for you; you don't have to do that. And you only pay per use. I think in AWS, use is clocked in milliseconds, and those milliseconds of use are what create the payment, so you're not paying for idle time. There are so many technologies we can layer on top of that. It's pretty sad: in one case there was a really expensive server, a two-thousand-dollar-a-month server, that was maybe being hit like six times a day. And then you'll probably hear this technology referred to as FaaS, or functions as a service, which basically says you deploy your functions in the cloud, they run off some sort of event, they're stateless, and when they're done they disappear.

"...you deploy your functions in the cloud and they're running off some sort of event; they're stateless when they're done they disappear."
[00:06:19] So first we're going to approach the mental barriers, and I truly apologize for the corniness we're about to experience. Like I said, I had a mental block, and I was like, man, this sucks, I hate making PowerPoint presentations. So I just had a little fun with it, and there it went.

[00:06:42] So, the first mental barrier is the idea that this is usually super hard to implement, that it's not for mortals, that only DevOps ninjas dare try it. And I think stuff like that too. Like, I need to pair with Robert, or I need to pair with Thatcher, if I ever want to try something like this.

[00:07:02] So that's usually one kind of mental barrier, and it's worth understanding that the idea is a little overstated, right? Fear is not really a good option. If we're afraid to try something because we think it's hard or whatever, don't do that. And so, in comes the Serverless Framework.

[00:08:01] And so, the Serverless Framework. As I said, I'm pretty much focused on AWS because our client uses AWS, and that's what led to these discoveries, but as you can see, it pretty much works with everything. It's kind of like stealing candy from a baby.

[00:08:27] So I just want to briefly show you what it takes to get it going. First you install it globally. Then you create a project, and then you run the project. You don't even have to touch the code; it's literally just step one, two, three, and then you're going to see some output. It's going to run. It creates a boilerplate hello world function and you just run it, and then you can go into that function and do whatever you want, and it runs it, and that's running locally. So that's how easy it is to set it up.
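For reference, those three steps look roughly like this; the template and project name here are placeholders, not taken from the demo:

```bash
# 1. Install the Serverless Framework CLI globally
npm install -g serverless

# 2. Create a project from the Node.js boilerplate (generates handler.js and serverless.yml)
serverless create --template aws-nodejs --path my-service
cd my-service

# 3. Run the generated hello world function locally, no AWS account needed yet
serverless invoke local --function hello
```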

"First you install it globally. And then you create a project and then you run the project...literally just step one, two, three."

[00:09:07] So hopefully that helps break down a little bit of the mental barrier: hey, I can use a lambda function pretty easily just by running those three commands, and boom, I have a lambda function. And at the very end of this I'll show you there is a fourth command. That's the fun one. It's SLS deploy, and it sets it all up on AWS without you even logging into AWS. It's kind of crazy. So then, the second point or takeaway I wanted to get into is the idea of the first idea architecture: the first idea you have when you come to an architecture. This is really just a mental model, a rule of thumb for following this process. Ask the question: what does the design look like if I use lambda everywhere to accomplish all the tasks that are happening in this application? What would that look like? And then ask a deeper question: is performance so important that I should be moving this stuff to the edge, to the CDN? Because lambda functions can now live right on the CDN and execute right next to your user. Understanding these questions is important, and then you can actually whiteboard what this would look like in a completely lambda-based design, where everything is a lambda function. Obviously that's not going to be optimal, but it starts the mental process. Then you go back and say, well, this thing is not really an optimized lambda function: this function invokes this function invokes this function, and I know the latency between waking this function up and waking that function up is going to make this whole flow super long. Maybe that's not optimal. So you can scratch away the things that don't make sense, and in the end you're left with an architecture that maybe does make sense and is optimized for lambda. It's just coming up with a model for how you do that, which I think is valuable for being more thoughtful about these types of solutions.

[00:11:21] How do functions work when you walk into this model? When you're thinking about what actually causes a function to run, I can show you in AWS that there are so many different triggers available to invoke a function. Some of the popular or interesting ones are things like SMS and email. So if somebody sends an email, you can have a lambda function that triggers and says, hey, I got an email that's related to support; you can scan email and do something special with it. And then you can tap a function into your Alexa smart home, so if you're at home and you think, man, I sure wish I could just click a button on my phone and all the lights come on, you can write your own function for that. CloudWatch Logs: logging is kind of a big deal. It's not always appreciated because it's a little bit boring, but the idea is you can monitor almost anything in AWS. You could monitor API traffic, what was coming into which routes, just by setting up CloudTrail and CloudWatch Logs and then handling that with lambda functions. Web endpoints, S3 bucket puts and deletes, all these things and many more can be triggers for lambda functions.
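As a rough illustration of how a few of those triggers get wired up in the Serverless Framework's config (the function names, bucket, and log group below are hypothetical, not from the talk):

```yaml
functions:
  onUpload:
    handler: src/onUpload.handler
    events:
      - s3:
          bucket: example-uploads          # fires on object puts in this bucket
          event: s3:ObjectCreated:*
  nightlyReport:
    handler: src/report.handler
    events:
      - schedule: rate(1 day)              # CloudWatch scheduled event
  onLogLine:
    handler: src/logs.handler
    events:
      - cloudwatchLog: '/aws/lambda/some-other-function'   # subscribe to a log group
  api:
    handler: src/api.handler
    events:
      - http:                              # plain web endpoint via API Gateway
          path: todos
          method: get
```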

[00:12:49] I already mentioned Lambda@Edge a little bit. CloudFront allows you to put those lambda functions in the CDN, so they're sitting next to users. Those functions can also talk to a database sitting on the edge. So literally, you can have this entire application sitting next to the user. And I actually created an architecture that sort of does that, which we can look at.

"CloudFront allows you to put those lambda functions in the CDN and so they're sitting next to users. Those functions now can also talk to a database sitting on the edge. So you can have this entire application sitting next to the user."

[00:13:16] Here's my architecture. This is a message-posting app, maybe like Twitter, that basically has three functions: you can auth into it, you can write a message, and you can subscribe to other users to see their feed, like Twitter's feed of messages. The idea is that this is something you could maybe use edge CDN lambda functions for. So at every CDN edge, say you're in Ireland, you'd have a function on the CDN that handles authentication of each user, one that handles messages (if I write a message and hit enter and it posts, it's handling that), and one handling subscriptions, and then there's a local database. It's handling all that stuff in real time, as fast as possible. But then you have another function that's going back to our central application, let's say us-west-2 in Oregon, where the authority database is and where the core business logic lives, which is basically having these sockets that are syncing updates from each region up to all the other regions. So in this case you have a hybrid approach: you have lambda functions sitting on the edge and you have a server running at your central hub, and that's orchestrating updates from any one region out to all the other regions. In this way you'd have a model where users here could interact with users over there, but all the information is being pushed aggressively to each outpost, so that as they come up with new interactions, it's already there, available to them. So that's just a brief example of what can be done.

[00:15:17] It's worth understanding that there are effectively two function types we'd be dealing with. One is asynchronous, which is kind of like a side-effect function: you just fire it and some side effect happens in the world. It's not really a pure function; there's no wait time for output. And then the synchronous one is like a very traditional API: I send a PUT to my API, I'm waiting for a response, and when I get the response back, that's how I know the cycle is closed. Most of the lambda functions we're going to be dealing with are going to be synchronous, not all, but I think in AWS everything is synchronous by default and you have to change it. Both choices have some performance implications, so that's kind of an important configuration. But let's see, beyond that. Oh yeah.
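For a concrete sense of the difference, here is a small sketch using the AWS SDK for JavaScript to invoke a function both ways; the function names are hypothetical:

```javascript
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'us-west-2' });

(async () => {
  // Synchronous ("RequestResponse"): the caller waits for the function's return value.
  const result = await lambda.invoke({
    FunctionName: 'my-api-handler',         // hypothetical
    InvocationType: 'RequestResponse',
    Payload: JSON.stringify({ todoId: 42 }),
  }).promise();
  console.log('sync response:', result.Payload.toString());

  // Asynchronous ("Event"): Lambda queues the event and returns immediately;
  // the side effect happens on its own, with no response payload to wait for.
  await lambda.invoke({
    FunctionName: 'my-side-effect-handler', // hypothetical
    InvocationType: 'Event',
    Payload: JSON.stringify({ todoId: 42 }),
  }).promise();
})();
```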

[00:16:19] Deployment. I mentioned that deployment is the cool part; it's pretty easy to deploy this thing, and that's it. I'll show you. So, I just downloaded somebody's little tutorial repo on serverless, like, what the heck, I've got five minutes before the talk, let's see if this actually works the way I'm advertising. And I literally yarn installed and SLS deployed into my account, and sure enough, that endpoint is hot. I hit it and I'm getting a response, and I'll show you. It just works. I was like, pow.
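The whole flow he describes is roughly this; the repo URL is elided in the talk, so it stays a placeholder here:

```bash
git clone <tutorial-repo>     # placeholder; the talk doesn't name the repo
cd <tutorial-repo>
yarn install                  # install the project's dependencies
sls deploy                    # "sls" is the Serverless Framework's short alias;
                              # this builds the CloudFormation stack and prints the live endpoint
```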

[00:17:10] All right. And so, at the end of it all, these are the upsides we get. We can deliver more value to clients with less work, because we don't have to deal with all the nuances of the infrastructure. I'll show you what the serverless repo is: you basically just update a few things in YAML, you update your JavaScript or your Python or whatever, you run SLS deploy, and boom. It's amazing the amount of stuff that is created in AWS to support one of these functions: you're creating S3 buckets, you're creating API gateways, you're creating users with permissions and roles and assigning those to just the right places. But all of that is happening for you behind the scenes, so you get a lot more value done with less work. Clients have more compute work accomplished for less money. In theory, and I heard of somebody doing this, with static websites you could actually have a function that just serves the site once a day into your CDN. Now it's all cached, and the cache serves the website all day long. Then tomorrow you warm the cache again, and boom. So you're being charged for one function invocation per day to serve a big static website, and the CDN is pennies. You could take a thousand-dollar bill down to, I don't know, 10 or 20 bucks; you can really create a lot of positive financial impact by doing this correctly. The architectures you implement are also very modular and self-scaling. If you remember the CDN architecture I showed, it was the same thing living on each CDN edge. So it's like, hey, create it once and then put it in each place, and then you can create other architectures that live right next to it. Now you can build a bigger and bigger application just by having these architectures interact with each other, so it really dials in on the microservice mindset. And then for the developers, we get to focus more on the application logic instead of the infrastructure.

"We can deliver more value to clients with less work, so we don't have to deal with all the nuances of the infrastructure."

I think the big danger of this framework is that it makes it really easy to build a bunch of junk. For instance, and this is what I was going to show you, you can build your entire API for a web application using these lambda functions. So let's say you did that, and now you have an API with a bunch of endpoints. And at one endpoint you think, I need some more information, I should just add another lambda function to go gather that. And then another. So all of a sudden you have this big chain of lambda functions behind this one endpoint. And the way lambda functions work is that they're basically a zip file stored in an S3 bucket. The code is zipped up, literally in a zip file, and it sits there until somebody invokes it. When it's invoked, the cloud provider's server will go grab that zip file, open it, and execute it, so there's latency there to grab the zip file, open it, and execute it. Now, with that said, after the zip file is opened there's a certain amount of time, even after your function execution has ended, that the process stays alive and all the state stays in scope. Nobody really knows how long it is; it could be 10 to 20 minutes that the function is still there. So if you hit it a second time, it's going to be much faster, because it's all still alive. And anything you have saved in state, let's say we opened up a database connection on that first run, as long as it's still open, that database connection is still there and you can still use it. So what some people do, a really tricky architecture, is just ping that function every five minutes to keep it hot, so it never goes to sleep. You're not charged for it being up and hot; you're charged by the execution. So you can keep it hot 24 hours a day, and even though every ping counts as a hit, the charges are still relatively small, so you can basically mimic a live server that's always up and avoid some of the latency issues. But even then, if you chain too many functions, there's still latency between one function talking to another, even if they're both hot. So there are definitely a lot of performance things that you probably won't learn much about until you actually start doing it. And that's really what my response was: it's not necessarily that you should have known better, or shame on you, developer. It's that there's a lot we don't know and a lot we're going to learn as we go forward into becoming more proficient at this stuff, and really the only way to do that is to try it and learn, to get that knowledge now. I would say that right now we're going to be pretty poor consultants on how to use lambda functions, because we've never really done it. But how will we ever become good consultants if we don't try?
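To make the warm-container behavior concrete, here is a minimal sketch of a handler that caches a database connection at module scope, with a check for a scheduled keep-warm ping; the pg client, the event.warmer flag, and the environment variable are all assumptions, not from the talk:

```javascript
// Module scope runs once per cold start and survives while the container stays warm.
const { Client } = require('pg');   // assumes a Postgres database
let client;                         // cached across invocations of a warm container

async function getClient() {
  if (!client) {
    client = new Client({ connectionString: process.env.DATABASE_URL });
    await client.connect();         // only paid on a cold start
  }
  return client;
}

module.exports.handler = async (event) => {
  // A scheduled event (e.g. schedule: rate(5 minutes) in serverless.yml) can ping
  // the function to keep it warm; bail out early so the ping stays cheap.
  if (event && event.warmer) {
    return { statusCode: 200, body: 'staying warm' };
  }

  const db = await getClient();     // reuses the connection on warm invocations
  const { rows } = await db.query('SELECT now() AS at');
  return { statusCode: 200, body: JSON.stringify(rows[0]) };
};
```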

Thatcher: [00:28:26] I would add one thing, which is that it's not just a question of architectures; it's how often it gets used. So, you know, we engage a client, they need something relatively simple done. This makes it much easier for us to just get in and get right to writing code rather than having to worry about any architecture, and it's going to cost them a lot less to begin with. You basically get right into figuring out the business logic part of it. So maybe it means that for two years they get to run on a lambda function, and at a certain point, at a certain scale, it's getting hit so often that it would be cheaper to just have a live server. That may be true, but in the meantime you've spent almost no time having to figure that out; you don't have to solve that problem until you actually know what the parameters look like, rather than guessing at it at the beginning. So I think being relatively aggressive with "can we solve this with a serverless function," rather than stopping and saying, oh, we need a whole architecture for this, I think it's good to be aggressive, as Ben was saying. Yeah.

"This makes it much easier for us to just get in and get right to writing code rather than having to worry about any architecture it's going to cost them a lot less to begin with. And you basically get right into like figuring out the business logic part of it."

[00:29:34] So just quickly, I want to show you the highlights of the AWS stuff that happens. First of all, the serverless repo that I deployed is this thing. It basically consists of your source stuff, which, in the project that we did... actually, that's not what comes up first. This is the bare-bones scrap metal that comes up when you first run serverless create: you basically get a handler function and a YAML file, and this YAML file is basically just telling the world, here are my functions, plus any other metadata. There are a ton of configurations you can do in it that all play with the AWS stuff. But the real bare bones is: OK, I have a provider, I'm running Node 8.10, and here's my function. The function is here, this is my hello world, and I can run that and it works. Then a layer up from that is: OK, now I have a more sophisticated API. I have a source directory, I have all this stuff, and my YAML file is a little more configured. It's got this API with a handler, and the protocols that it's using, and it's still a really simple serverless config. But what happens under the hood is that it uses something called CloudFormation, which is AWS's resource automation tool. It builds out anything you need in AWS from code, so it's infrastructure as code, more or less. Running this file, just as you see it here, causes this CloudFormation stack to build. And if you look at the stack of stuff that got built in AWS because of this file, it's every resource you see: it built an S3 bucket, it built a function, it built an API gateway, it configured all this stuff to do different things, all these execution roles. If you can imagine what it would feel like to go into the AWS GUI and do these things one step at a time, it would not only take you eight hours, you would make a mistake, and it would not work, and you would not know why. So then tomorrow you come in and spend another eight hours figuring out why, and finally you get it to work and you think, all right, I'm a genius. But in reality, this does it all for you automatically; every one of these things is configured in AWS. And so what I'm going to do next is just show you the punchline: the endpoint that was created.
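The bare-bones project he's describing looks roughly like this; the service name is a placeholder, and the handler body is paraphrased from the framework's boilerplate rather than copied from the demo:

```yaml
# serverless.yml
service: my-service          # placeholder name
provider:
  name: aws
  runtime: nodejs8.10
functions:
  hello:
    handler: handler.hello   # points at the hello export in handler.js
```

```javascript
// handler.js
'use strict';

module.exports.hello = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from a lambda function', input: event }),
  };
};
```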

[00:32:38] I was there until it changed it.

[00:32:42] OK. So, it's right here.

[00:32:45] So when I ran the command to deploy, it said: here's your endpoint, good job, you just created the website. And then you go hit that endpoint and, OK, it's there within a minute.

[00:32:55] Oh, well, what endpoints did I create? Well, I created this API that has these, and this is all on Express; it's an Express server. So I have this Express server running on a lambda function with these APIs, and I have a /todos endpoint. Let me hit that and see what it looks like. So now I have my new endpoint, I'm just going to go to /todos, and boom, there it is, it works. So the response that I got for todos off that endpoint, on the internet, is this stuff right here: the stuff that I return from that endpoint. And so I'm going to show you literally what it looks like for me to undo all of that: SLS remove. This is also very cool. Oh, but before I do that, look: that's what the CloudFormation stuff looks like.
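A minimal sketch of how an Express app ends up inside a single lambda handler; the talk doesn't name the wrapper library it used, so serverless-http here is an assumption, and the todo data is made up:

```javascript
// serverless-http wraps an ordinary Express app so API Gateway events
// are translated into Express requests and back.
const serverless = require('serverless-http');
const express = require('express');

const app = express();

app.get('/todos', (req, res) => {
  // Hypothetical in-memory data standing in for the demo's response.
  res.json([
    { id: 1, text: 'Give a tech talk on serverless' },
    { id: 2, text: 'Deploy with sls deploy' },
  ]);
});

// serverless.yml points its http event at this export.
module.exports.handler = serverless(app);
```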

[00:33:44] OK.

[00:33:49] Yeah. Remember, the lambda function is just a zip; your code is sitting in a zip file. And this is why, when you build a big application in a lambda function, you can build it as big as you want: I use webpack.

[00:34:01] So basically, webpack will go through and bundle all that junk, make it as small as possible, and that's your final zip. You just tell your serverless file: here's the folder that you're going to upload as my function. So it's optimized using webpack. You can also use jest, or whatever you want, to test all your stuff, and you can use type checking. We have these functions that are bundled with webpack, tested with jest, and written in TypeScript, so you don't lose anything that you already know how to do; you just add to it by making it a lot easier to deploy. And so, here's the function that I deployed. It's got this kind of cool interface: here's the API gateway that connects it to the internet, I get logging out of the box, and then just the function code. What I really wanted to show quickly is, OK, I'm just going to run this command: SLS remove. Goodbye. Now all of that stuff is disappearing; it's being deleted. Right. Yeah, if you made an expensive mistake, you can get out of it just by hitting SLS remove.
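One hedged sketch of how that wiring might look; the talk doesn't show the exact config, and serverless-webpack is just one common plugin for bundling (pointing the package section at a prebuilt dist folder is another option):

```yaml
service: message-api            # hypothetical service name

plugins:
  - serverless-webpack          # bundles each handler with webpack before packaging the zip

provider:
  name: aws
  runtime: nodejs8.10

functions:
  api:
    handler: src/handler.handler   # hypothetical entry point, bundled at deploy time
    events:
      - http:
          path: '{proxy+}'         # route everything to the Express-style handler
          method: any
```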

[00:35:27] Yeah. So, OK, now it's gone. To prove it, I'll just go back to my lambda function interface: nope, no functions, it's like, hey, create one please. And I'll go into this CloudFormation stack and look at the history: no history, no current stacks built. It's gone. Everything's gone. So, OK, literally a few minutes ago I cloned this repo, I yarn installed, and then I hit this command: SLS deploy.

[00:36:11] Yeah, and that's what gave us that endpoint, and it put up all that stuff. It was that easy. So that's the end of the talk. Questions?

Thatcher: [00:36:25] So you were doing this manually, or I should say I was doing this manually, and then found the serverless framework, which is super easy. Maybe one of the challenges we had initially was that it's really nice to be able to dev locally and not have to do a deploy every time to test things. And the workaround I figured out, which was better than nothing but sucked compared to this, was a docker container that's basically just the lambda environment. That's better than nothing, but with this, I don't know how they do it actually, you can run it locally and dev to your heart's content, be bad at programming, make typos and everything, and then when it works, then you do the deploy. So it makes it really easy to manage the lifecycle of the code. And it works with Azure kind of the same way, as I understand it. I haven't tried it, but it's sort of ubiquitous; that easiness is not just limited to AWS.

Audience: [00:37:30] Are there any competitors to serverless that you looked at?

Ben: [00:37:35] Well, I mean, Terraform has their own sort of "hey, build lambda functions," but Serverless is laser focused on serverless; that's how it works. So, I don't know. I'm sure there are others; there's probably a lot of money in it, so there are probably other people trying to become relevant. Maybe they already have and I'm just not aware. How do they make money? How does Serverless make money? They have enterprise stuff that they sell and consult on. I don't know why they're not charging us, the common man, right? Yeah. Maybe they even offer their own cloud, I don't know. But OK, it's done, so now that function's back. I have all that stuff again. I mean, I just took 16 hours of work and condensed it into one two-word command: SLS deploy.

Audience: [00:38:35] You can get it down to one. Going back to the Azure thing: I was playing with this, it's been a little over a year ago now, but I was deploying, just for fun, to AWS and Azure in a way that would have been transparent to users, like deploying to multiple clouds where the application could be hosted by either. I ran into a problem that you must not have had so much: did you run into trouble integrating TypeScript? Because I was really struggling with the weird TypeScript modules and stuff, but like I say, it's been a little bit. Did you use webpack? I was using webpack. I must've been doing something wrong, because I was really struggling with getting TypeScript integrated.

Ben: [00:39:21] No, I mean, TypeScript was actually easy; maybe that's just an evolution thing, everything's moving forward so fast and the tooling's better. It was actually really easy to implement TypeScript, and really anything I wanted to layer into the software I was writing was not a problem. It just works; the repo works like any JavaScript repo, really, it's just JavaScript. And the only thing that Serverless cares about is this YAML file. How you write this YAML file matters; it's really important. This system has access, as I understand it, to every resource that CloudFormation can touch, and you can touch it in this YAML file, so you can set users, you can do anything in this YAML file. So this file is important. And then, if you have a dist folder where you package your distribution, there's a place in this YAML where you just say, yeah, here are my functions in this dist folder, and that's it. Everything that happens outside of that dist folder is your own sphere; Serverless doesn't care about it. So as long as what goes into that dist folder works and compiles and is valid JavaScript, it'll work.

Audience: [00:40:53] So, two really quick questions. I haven't seen you do any sort of secrets. I don't get how it's associating anything with your account.

[00:41:04] Sorry, I definitely skipped that part. So, if you want to do this on your own, you have to go create an AWS account, and then I would recommend creating a user in that account. Like, I can go into my IAM stuff, and there's this user management portal: create a user, a dev user, a programmatic user, and that'll give you a secret key and an access key. Then those keys need to be configured in your AWS CLI, so you have to install the AWS CLI on your machine, and in order to be able to run commands against your AWS account, you have to use that user. Once that's in your CLI, you can come in right here, and the CLI can have different profiles: I can have my GenUI profile, I can have all these different profiles with different key pairs in them. So I can just say, you know, today I'm deploying to this other account, I'm going to deploy these functions to GenUI.

Ben: [00:42:19] So I just type that in right there, and now all of that will go to my... This integrates with the CLI, and it'll use the credentials for that profile, for that user. So as long as you give it this kind of superuser power, no matter what you put into the YAML file, it will execute. What a lot of more sophisticated shops will do is create a user that only has access to provision things that are in the scope of your function, which is not trivial, as we saw with all the stuff that gets provisioned. Serverless has suggested user permissions to cover all the CloudFormation stuff, but they claim the list is not complete and that it changes all the time. I created a user with the permissions they listed and I got errors; I had to go in and add stuff that was missing, because things were already outdated. They don't maintain that list, but you can create one yourself and maintain it.

Ben: [00:43:24] It's probably best practice actually.

[00:45:17] Is there such a thing as serverless tests sitting next to the serverless code, and how do you think about that?

[00:45:23] It's an incredible question.

[00:45:25] So, you know, one of the hard things about AWS lambda functions, and probably any cloud provider's lambda functions, is that these functions are written to live in a context that's extraordinarily unique: the context being integrating with CloudWatch Logs, or with an S3 bucket, or with a Kinesis stream, or with Alexa smart home devices. And that context is almost impossible to consistently replicate on your local machine, or anywhere else, to get dev work done. So really, the only true place to test a lambda function is in context. But you can invoke locally here, and I'll show you. So if I go, I think it's npm run dev, OK, now I've created a local context here.

"So really the only true place to test the lambda function is in context and you can invoke local here to do that."

[00:46:24] I didn't. All right.

[00:46:29] Oh, because the port is already in use; I already did it, that's why. OK, so I had already been dinking around with this. So I did it again: npm run dev.

[00:46:38] And so what it does is it takes that API gateway that actually got installed in AWS and runs a mock version of it locally. And you can actually hit that endpoint locally using a little HTTP client.
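For context, that local loop looks roughly like this; the talk doesn't name the tool behind npm run dev, so the serverless-offline plugin mentioned in the comment is an assumption, and the port comes from the demo:

```bash
# Start a local emulation of the deployed API Gateway + lambda
# (npm run dev presumably wraps a plugin such as serverless-offline).
npm run dev

# In another terminal, hit the locally emulated endpoint:
curl http://localhost:4500/todos
```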

[00:47:21] If I were to go, like, http to the 4500 port and then go to /todos.

[00:47:32] There it is. So I get the same responses I would've gotten on the web. But the thing is, a lot of the triggers have a data package, a payload, with them, and really, if you go into the lambda function itself you can see what that looks like. The function should be back up now.

[00:47:50] Yeah. OK, let's go to the function.

[00:47:53] Hey guys OK.

[00:47:59] I just want my easy button. There it is. All right. So, you go into the function itself, and there's this test button. With this test button you can basically see all the things that you can set up as triggers; all these services can be potential triggers for your lambda function. And if I say I want to do an API Gateway proxy, boom, it's going to show you what the input of that looks like, the event the function gets when it's invoked.

[00:48:28] And this is that object. So I can just grab this, copy-paste it into my local project, and when I invoke the function locally I can actually pass in a file containing this object, and it will simulate: OK, you've got this data. So that's one way to test locally: you can just come in here, grab their template, and maybe modify it to be more specific to your needs. And if there's another trigger in play, you can grab its payload the same way.
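A sketch of that workflow with the framework's CLI; the function name and the trimmed event shape below are placeholders standing in for the console's "API Gateway AWS Proxy" test template:

```bash
# Save a trimmed copy of the API Gateway proxy test event locally:
cat > event.json <<'EOF'
{
  "httpMethod": "GET",
  "path": "/todos",
  "headers": { "Accept": "application/json" },
  "queryStringParameters": null,
  "body": null
}
EOF

# Invoke the function locally with that payload as the event object:
sls invoke local --function api --path event.json
```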

[00:48:59] But it's never going to be the same as testing in the real environment, right? Right.

[00:49:06] Yeah. Oh, definitely. You'd use this pattern to write a lot of tests: if you have functions, you write tests, and if you want to test how it handles the event object, you'd write tests around this kind of stuff.

[00:49:20] You certainly can write a significant amount of tests, but testing how it works in context can be very tricky, depending on what you're trying to accomplish. Even using your CLI, we could write a function that actually publishes stuff to AWS, like to a live Kinesis stream; you can do a lot of interesting testing, but it still needs that context. So, when you launch a function you can say stage dev, and if I go to the function, look at the title of the function: it says serverless dev api. So probably best practice is to launch a dev version, mock out everything you'd have in a dev environment, and then you can go into monitoring and view the logs in CloudWatch every time that function is invoked. And if you console log in a lambda function, the beautiful thing is that that job gets sent to a different server, a different process that you're not being charged for, and that console log output is actually put into CloudWatch. So you can log all over the place inside a lambda function, structure your logs however you want, and those logs will be shipped to CloudWatch effectively for free; there's no compute cost. Whereas if you're running your own server and you're console logging and building logs, that's your server paying for all of that.
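The stage-and-logs workflow he describes maps onto the CLI roughly like this; the function name is a placeholder:

```bash
# Deploy a separate dev stage (resources get the stage baked into their names, e.g. my-service-dev-api):
sls deploy --stage dev

# Tail that function's CloudWatch logs from the terminal; console.log output shows up here:
sls logs --function api --stage dev --tail
```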

[00:50:54] So in this case we could log all we want. Right now there are no logs, because we're not really logging anything; there's a log stream, but no logs. But if I were console logging in that function, I'd get all those logs right here for free, at no penalty to my execution time, because that job just gets pushed off, and that's it.

[00:51:27] Go get it done guys.