02/12/26 - Updating Cache-Data Using Kiro Part 2 - Dev with Me [Video]
In this episode of Dev with Me, I continue building out version 1.3.8 of 63klabs/cache-data by tackling GitHub security fixes, migrating tests from Mocha to Jest, and implementing API request pagination with retry logic — all driven by Kiro's spec-driven development workflow. I walk through how I craft detailed requirement READMEs, use a SPEC-QUESTIONS document for async Q&A with Kiro, and review the generated requirements, design, and task documents before letting Kiro execute. You'll also see how I protect important code decisions from being overwritten by AI using a custom block-quote comment convention.
The second half gets hands-on as I spin up two serverless applications using the 63Klabs Atlantis Platform — one acting as a paginated endpoint and the other consuming it through Cache-Data. From creating repositories and seeding starter code to deploying pipelines and getting both apps talking in the cloud, it's a real look at the iterative process of building and testing infrastructure. Next time, we'll wire up full pagination testing, introduce chaos for retry validation, and explore Kiro's predictive coding capabilities.
- Feel free to bump up the playback speed
- Turn on closed captions
- Jump to specific topics using the timestamps below
- If the screen is blurry, make sure you're streaming at 1080 HD
AWS is not a sponsor.
Chapters
- Intro & Recap of Version 1.3.8 Progress
- Migrating Tests from Mocha to Jest
- GitHub Security Scanning & Recommendations
- Spec-Driven Workflow: Requirements as a Readme
- Starting the Security Fixes Spec
- Using Spec-Questions for Async Q&A
- Reviewing Requirements & Design Documents
- Task Execution & Reviewing Diffs
- Block Quote Comment Convention for Decision Preservation
- Protecting Important Comments from AI Rewrites
- Security Fix Tasks Complete & All Tests Passing
- Spec-Driven Development vs Vibe Coding
- Starting API Request Pagination, Retries & X-Ray
- Pagination Options & Configuration Defaults
- Using Working Sample Code as Requirements
- Retry Logic & When Not to Retry (400 vs 500)
- Reviewing the Pagination & Retry Requirements
- Clarifying Design Docs with Kiro (Enabled Flag, Max Retries)
- Ensuring Jest-Only Tests During Migration
- Updating Steering Documents for Migration State
- Pagination Implementation Tasks Complete
- Setting Up Test Applications with Atlantis & SAM
- Repository Creation & Starter Code Seeding
- Dev/Test/Prod Branch Workflow & Pipeline Setup
- Cloning Repos & Opening Projects with Kiro CLI
- Deploying the Pipeline & First Application
- Creating the Second Application (Endpoint)
- Connecting the Two Applications
- Modifying the Endpoint to Return Raw Game Data
- Deployment Failure from Broken Tests
- Iterating on the Endpoint Response Format
- Kiro Code Completion for Generating Game Names
- Both Applications Talking Successfully
- Next Steps: Pagination Testing, Chaos/Retry Testing & Packaging
- Outro
Transcript
Intro & Recap of Version 1.3.8 Progress (00:00)
Welcome back to Dev with Me. In the previous video I started updating Cache-Data, working on version 1.3.8, and I was basically just doing some migration.
Migrating Tests from Mocha to Jest (00:23)
I've been working on migrating the tests for Cache-Data from Mocha over to Jest. I'm working with Kiro, and Kiro prefers Jest over Mocha for these types of projects because of its ability to do AWS resource mocking and stuff like that; there are better integrations for that. So, since this project deals heavily with a bunch of production code and a bunch of AWS resources, we're working on moving some of the testing functionality over. That way we have 100% test coverage.
Test coverage was lacking for Cache-Data in terms of AWS mocking; it just wasn't there. That hasn't really been too much of an issue in the past because it was well tested just by running in production for so many years. So, if you don't know what Cache-Data is, I definitely invite you to go to the GitHub repository for Cache-Data to catch yourself up on it.
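To illustrate the kind of AWS mocking Jest makes easier, here's a minimal, hypothetical sketch (not actual Cache-Data code): a data-access object takes an injected client, so a test can hand it a stub instead of a real AWS SDK client. Jest automates this pattern with `jest.fn()` and module mocks; the class and stub names below are invented for illustration.

```javascript
// Hypothetical sketch: a DAO accepts an injected client so tests can
// substitute a stub for the real AWS SDK client.
class CacheDao {
  constructor(s3Client) {
    this.s3 = s3Client;
  }
  async get(key) {
    const resp = await this.s3.getObject({ Key: key });
    return resp.Body;
  }
}

// A hand-rolled stub standing in for the AWS client. Jest's jest.fn()
// gives you call recording like this automatically.
const stubS3 = {
  calls: [],
  async getObject(params) {
    this.calls.push(params); // record the call so a test can inspect it
    return { Body: `cached-value-for-${params.Key}` };
  }
};
```

A test can then assert both the returned value and that the "AWS" client was called with the expected parameters, without touching any real AWS resource.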
It's not so important that we're working with Cache-Data here; this is just another video about me going over my workflow. So, I'm not going to talk too much about what Cache-Data is and all that sort of stuff. I'm just going to focus on my Kiro workflow for the next phase of getting version 1.3.8 out, and that is implementing some security fixes.
GitHub Security Scanning & Recommendations (2:18)
There aren't any vulnerabilities currently detected in Cache-Data. When I commit my code to the GitHub repository so that it can then be published to NPM, GitHub runs code analysis on what I check in and provides security recommendations for my code. Nothing came back for the code that actually runs Cache-Data, but there were some fixes recommended for the tests... well, I don't even know if they're Jest tests; they could be property tests and stuff like that. Basically, it's within the testing framework that needs to be updated. Nothing in the production code. It's all about the tests.
Spec-Driven Workflow: Requirements as a Readme (3:18)
So, there were three things. Two of them, I think, are very similar, very related. What I did here, as I explained in my previous videos, is my workflow with Kiro: I spend time, maybe an hour or two, maybe even over a two-day period or so, coming up with my specification or my requirements. And I don't want to say requirements, because I mean they are requirements, but it's not the requirements document; I want to make that distinction. There are the requirements in terms of all the notes, the links, maybe API specs, code snippets that I provide in a README document, essentially, that I place within a directory underneath specs. And I have a certain naming convention that I use. I go over this in a previous video, but basically I'm working on version 1.3.8, so I name my directories a certain way. That way, they're in chronological order and easy for me to reference. And then I put all my requirements not in a prompt, but actually in a document. That way, I can be somewhat elaborate. This is not a requirements document; it's essentially a large prompt, so I'm able to go into some detail. I basically copied and pasted in what GitHub gave me for the security reports, linked out to it, and we're going to start from here.
Starting the Security Fixes Spec (5:12)
So, typically what I do is I let Kiro know.
All right. So, basically I said, let's start on specs 1-3-8-security-fixes-README.md. It's a list of requirements that I want a spec-driven workflow for. Usually I also give it some information about putting any questions into a SPEC-QUESTIONS document. Oh, which it already did. Awesome. So the SPEC-QUESTIONS document is not a Kiro thing; that's something I came up with, just like this whole README thing, in which instead of trying to put everything into a prompt, I put it into a README. And again, it's just kind of messy: it's notes, it's stuff that I wanted to do. It's not full-on requirements. Kiro will create the requirements document based upon the answers that I give it. I also ask that if Kiro has any questions for me, it puts them there instead of asking in the chat -- because typically it will ask, you know, in a bulleted list or a numbered list, "hey, I have these questions, can you please respond," and then I've got to respond to them in the prompt. I don't necessarily like that. I like to take my time, especially if it's something that I don't have an answer to, that I've got to go to somebody else for, or I've got to do some research. It's just nice to have it within a SPEC-QUESTIONS document.
Using Spec-Questions for Async Q&A (6:42)
So instead of having Kiro ask me the questions in the document, I have it send it all to the SPEC-QUESTIONS document. Another thing about that is instead of just giving me a list like a numbered list of questions, Kiro also has the ability to then go deeper into its questions as well.
This is a bit of a different format than what it has given me in the past. A lot of times it just depends on what kind of project I'm having it do.
So, it is asking some clarifying questions. GitHub code scanner identified three security issues. Yep, it's just stating what I already know.
Questions for me. Scope and priority.
So should we fix all three issues in the spec or would I prefer to do each separately?
All right, I've answered the questions.
Okay, it's going to go through... and work on it. I'll come back once that's done.
Reviewing Requirements & Design Documents (7:58)
Okay, so I have reviewed the requirements. Everything looked good here. It wasn't as long as I expected it to be, which is great. I was worried because, when I was going through the SPEC-QUESTIONS, I was saying yes to a lot of additional work, but when the requirements came out, it wasn't as much as I had thought, which is just great. The design document, again, I reviewed it. It looked great. It was going over some code snippets and stuff. Looked awesome. And I am having it create the tasks now. It looks like that is done as well, and it goes over each of the tasks that it will perform. It looks like it came up with 12 tasks, which is typical for a project like this.
Task Execution & Reviewing Diffs (8:58)
So, I'm going to have the tasks start. And as always, I have the repository open here, so I can actually see if anything gets modified. Nothing within the scripts section, which is the actual production code, should really be touched. I'll review the tasks when it actually gets to that point, but for the most part, everything should be within the tests and the audits, which is not production stuff; that's just internal testing and such. So, I'm not too worried about that. As always, I can review the changes. This one is "fix a script," so if I view the changes here, I can actually see them. In the past, a lot of what I was creating was brand new documents, so the diffs were a whole bunch of differences. But this is nice, that I can actually go and look at what the differences are in each of these.
Block Quote Comment Convention for Decision Preservation (10:09)
It is using the comment notation that I've been directing it to use. And this is just something I came up with; this is no standard or anything. It's basically based upon the fact that in markdown a block quote is an angle bracket, and those are usually used for important notes and stuff. So I use that to note anything that I specify as a developer that I don't want removed. If AI goes through and revises the document, it knows to retain all that. And if it comes across anything that contradicts it -- because maybe a new feature was added, or something changed and that comment's really out of date -- it'll prompt me and let me know: "hey, I came across this. I'm not supposed to be changing these things without letting you know. What do you want me to do?" I have a steering document that basically goes over that detail.
Protecting Important Comments from AI Rewrites (11:17)
So, this isn't a standard or anything; I'm carrying over that block quote idea from markdown. I just basically use a greater-than and an exclamation point to note that this is important. And again, these are comments that AI or I added that are important: don't delete them. It's not necessarily a functional comment; it's not saying what the code does. It's explaining a decision. Somebody might come in, look at your code, and think, "well, they're just using a for loop, why aren't they using map there?" It was a decision that a for loop was used instead of a mapping function, and that can be noted here. That way it's clear: one, it's not just a functional comment, it's actually the reason why we did something, and it's important. And not only is it important, it tells AI and humans: hey, this is a decision. Don't just delete this comment. Don't just change it without ensuring that it's either still in play or outdated. Because, yeah, when AI is rewriting your code and you have an important decision or important comment, you don't want it to accidentally remove it.
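As a concrete illustration, here's what that convention might look like in code. The `>!` marker is the part described above; the exact function and wording are mine, invented for the example:

```javascript
// >! DECISION: Using a plain for loop instead of .map() so we can exit
// >! early on the first cache miss; .map() always walks the whole array.
// >! Do not refactor this away without confirming that constraint no
// >! longer applies.
function firstMiss(keys, cache) {
  for (let i = 0; i < keys.length; i++) {
    if (!cache.has(keys[i])) return keys[i]; // short-circuit on first miss
  }
  return null; // everything was cached
}
```

The comment records the "why" behind the for loop, so a human or an AI revising this file knows the shape of the code is deliberate, not accidental.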
So, this is-- I'm just going to do that. And I'm going to do this. Where'd that go? So, I'm just going to let it do any check.
Okay.
There.
Now, the reason why it prompted me for this is because this was a different document. What you just saw me do here is basically allow it for any document, so it should be able to do that from now on. All right, we'll just let this run and I'll come back. Oh, it's finishing up, I would assume.
Security Fix Tasks Complete & All Tests Passing (13:45)
Okay, good. Might have had a failing test. So it looks like the tests ran successfully and the security fix is working correctly. The test that validates JavaScript code examples passed. That's awesome.
So, it made the fixes.
Spec-Driven Development vs Vibe Coding (14:33)
All right, good. And again, I've already reviewed all these tasks. I can't stress enough how important that is, because this is not vibe coding; this is spec-driven development. So, as such, you should always be reviewing. The majority of your time is, one, gathering all the pieces you need for your requirements prompt and, two, reviewing everything that Kiro produces. Whether that's the requirements document, the design document, or the task document, always review it. And then, of course, review the final output. So, I was able to finish up all the tasks for the security fixes and everything's passing.
Starting API Request Pagination, Retries & X-Ray (15:27)
So, I've now completed, for version 1.3.8, both the security updates, which applied some fixes to some tests, as well as another round of migrating Mocha to Jest. Now what I'm going to start with is the API request pagination, retries, and X-Ray. So I created a README for this, and basically this is an example of me providing a bunch of sample code. Not even sample code: this is code that I've been using to facilitate pagination and retries, essentially in production. What I would like to do, instead of having to code this for every single application, is move it into the API request object that Cache-Data provides. That way it can just be used from there, and it doesn't need to be coded for every application. Just make it native within Cache-Data. So I basically took the code that I had to facilitate all that, and it's working. And I was able to ensure, especially with pagination, where there are different formats that various endpoints might use, that I was flexible.
Pagination Options & Configuration Defaults (17:12)
So right here I have an endpoint for the Acme company, and this endpoint is getting rocket information. There's another one for skates. So basically we have the ability to have pagination options here. The batch size default limit is 200.
I'm wondering if I should just make that 100, because I feel like that's kind of a standard default. These are pretty typical in terms of, you know, the limit. This one here is called 'take', that one 'skip', with 'items' and 'total' items; somewhat standard. 'Take' usually isn't used like that; usually it's 'limit', so I'll be sure that 'limit' is the default.
But yeah, so basically these are the options, and then to implement it, I actually have a parent class that implements it. It brings in the options, it uses them, and it goes from there. So yeah, the default is 'items', 'offset', 'limit'. And what did I have over here?
'Items', 'offset' is 'skip', and the 'limit' label is 'take'. So basically these are overriding. This is an example of overriding the standard defaults. With the requirements that I came up with, I have this listed, and I link to the classes that I want it to use as examples. They are working examples, implemented in production, but so they can be replaced across other projects that need pagination, we're moving pagination into the API request class. Be sure to review how the files and methods are related. And then: the pagination options should continue to be used, the results should be moved to the API class, and all that sort of stuff.
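The default-plus-override idea above can be sketched roughly like this. The actual cache-data option names may differ; the defaults and the 'skip'/'take' overrides below are illustrative only:

```javascript
// Assumed defaults for an offset/limit-style API; names are hypothetical.
const DEFAULT_PAGINATION = {
  enabled: false,
  limit: 100,          // default batch size
  itemsLabel: 'items', // property holding the array of results
  offsetLabel: 'offset',
  limitLabel: 'limit'
};

// Per-endpoint overrides win; anything unspecified falls back to defaults.
function mergePaginationOptions(overrides = {}) {
  return { ...DEFAULT_PAGINATION, ...overrides };
}

// An endpoint that uses 'skip'/'take' instead of 'offset'/'limit':
const takeSkipEndpoint = mergePaginationOptions({
  enabled: true,
  offsetLabel: 'skip',
  limitLabel: 'take'
});
```

The shallow spread merge means an endpoint only declares what deviates from the standard; everything else stays at the defaults.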
Using Working Sample Code as Requirements (19:45)
So basically this is an example where I created working code outside of Kiro, by hand or maybe with some other AI process, and now I want to implement it into my project. Pagination was kind of like, "how do I do this?" "How do I even specify this?" "And how would it even work?"
I could do my work outside of this project, create a little test project or whatever, define it, and make it in a certain way that I could then just drag and drop that code into this project, have Kiro reference those files as reference code, essentially, as to how it's done, and then go through and implement it. Why might I not want to do that in Kiro?
Well, I certainly could. I certainly could create it from scratch. However, with the pagination, there's a lot of logic involved, and so I need to get my requirements just right in order to cover all the ifs, ands, and buts, you know, in terms of logic. Kiro could do that, and I could certainly iterate with it, but it would be very hard for me to specify all the requirements if I don't even know how it works. It's one thing to come to Kiro with a set of requirements, but if you're not even sure how something would work, that's just very hard to write requirements for. So it's maybe better to go experiment a little bit and then come back with a definite set of requirements, and that's what I have here. And my definite set of requirements are basically code.
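The core of that working sample code boils down to a loop like the following. This is a simplified sketch assuming an offset/limit-style endpoint; the real code also handles configurable label names and other response shapes:

```javascript
// Simplified pagination loop: keep fetching pages until we've collected
// every item the endpoint reports. fetchPage is any async function that
// returns { items: [...], total: N } for a given { offset, limit }.
async function fetchAllPages(fetchPage, limit = 100) {
  const all = [];
  let offset = 0;
  while (true) {
    const page = await fetchPage({ offset, limit });
    all.push(...page.items);
    offset += page.items.length;
    // Stop when we've reached the reported total, or the endpoint
    // returns an empty page (defensive guard against a bad total).
    if (offset >= page.total || page.items.length === 0) break;
  }
  return all;
}
```

Writing this by hand first, against a real endpoint, is exactly what made the edge cases (empty pages, totals, label differences) concrete enough to hand to Kiro as requirements.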
Retry Logic & When Not to Retry (400 vs 500) (21:47)
Also, I added the retries as well. Retries are just simple: just in case it tries to connect to an endpoint and it fails for some reason, a network error or a 500.
How should we do retries, or should we not, and if we do, how many? You know, one retry: it fails and then does one retry, so that's a total of two tries. Because sometimes there are just some network hiccups.
I've worked with a vendor with an endpoint where sometimes the first request fails, but if you do a second one soon after, it'll work. Weird. So just to make it a little more robust, we allow retries.
I specified that the retry should not happen if it's a 400 error. A 400-level error typically means not found or an authentication error. You're not going to get past that, because you need to change the request. In fact, that's what most 400 errors are pretty much stating: you sent the request wrong, or you had the wrong information in the request, whether it's credentials or whatever, and unless you give me a different request, I can't help you. Since we're doing a retry on the same data, there's no point. So it should only retry if there's a network issue or a 500.
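That rule can be captured in one small predicate. This is a sketch of the decision described above, not the actual cache-data implementation; the function name is mine:

```javascript
// Retry only when a retry could plausibly succeed: network errors and
// 5xx (server-side, possibly transient). Never retry 4xx, because the
// same request would fail the same way again.
function shouldRetry(err, statusCode) {
  if (err) return true;               // network hiccup: worth retrying
  if (statusCode >= 500) return true; // server fault: may be transient
  return false;                       // 4xx etc.: client must change the request
}
```

Keeping the decision in one place also makes the "when not to retry" requirement easy to unit test.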
Reviewing the Pagination & Retry Requirements (23:26)
All right. So this is the document I gave Kiro. Again, I specified if there's any questions, ask me. It had no questions to ask me.
So I reviewed the requirements. The requirements are very nice and neat. This is where it just took my code, because my code was correct. And I only say it was correct because it's been in production; it's been proven correct.
Coming up with this logic would have been so hard for me to do without writing sample code. So again, sometimes you just want to write some sample code to figure things out. You don't need to do a whole project; just a method or something that's maybe complex. Just write it out as code and make sure that it works.
If you're a Python developer and you want to write it in Python, but you're working in Node, you could probably do that and then just say, "Hey, this is written in Python; apply the same logic but port it over to Node." You could certainly do that.
So basically it's a very short document. There are, what, three requirements? Requirement one is pagination support, requirement two is retry support, and then requirement three is enhanced X-Ray subsegments.
I've been having an issue where the subsegments on the retries and such don't get recorded. X-Ray is basically a level of monitoring in which you can see the path of a request, and if it fails, or if there are any issues or latency, you can track that. For some reason, when a request goes out, that'll be logged, but if it has to do a retry or pagination, a lot of that doesn't get logged, and I don't know why.
So, I don't know how to enhance it, but I assume Kiro does. I'm going to make sure that Kiro can enhance that. That's also going to be one of the things that I'm going to try to test. And depending on how Kiro did, I might have to go back and say, "Okay, yeah, I'm not getting anything here." So, this should be fun.
As a developer using the existing API request class, I want all current functionality to remain unchanged. This was key. This is in production. If anybody's not using pagination or anything like that, or even if they currently are but implement it on their own, this should not change the way the current functionality works. We don't want any breaking changes.
As a developer implementing a DAO class, I want to configure pagination and retry options per endpoint. That's exactly how Cache-Data works: you describe the endpoints to Cache-Data so it knows how to use them, which is perfect. And then it's going to add some metadata information. Right now Cache-Data gives back a very basic response; it doesn't really give much information back, but it's going to give some information back here. So that doesn't really matter.
Clarifying Design Docs with Kiro (Enabled Flag, Max Retries) (26:51)
The design document. I had Kiro make a few changes. So, I was reading through this and when I got to this part, I was kind of like--
Let's see here.
Yeah. So, enabled.
So basically, how Cache-Data works is that you provide it the method, let's just say a GET request or a POST request, and the URL. Or, if you don't provide the URL, you can give it the host, the path, and parameters, or you can just give it a direct URL there.
And you can provide it options just like you would any fetch, but then you can also add in, now, pagination and retry information. There are a bunch of defaults here, and you can provide pagination as enabled, true or false. I was kind of like, well, if you include pagination as a property, to me that almost means it should just be enabled.
But then I kind of felt, well, what if you just want to use the defaults and you want pagination? I mean, I guess you could just pass pagination empty and accept the defaults; that way it's true. But what if you wanted to change the logic for some reason, for the way you were implementing this?
Maybe you wanted a feature flag or something like that. Or maybe some requests you wanted pagination on and some you did not. So, Kiro came up with this whole enabled thing, and I was almost going to take it out, but I decided to keep it. So enabled defaults to false.
So, that basically means that if you're going to be using it, you got to send the pagination object anyway. So, you might as well just have a property value of false if you're not going to change any of these other things. So, we'll just leave it as is.
But if enabled is true and that's passed, but no other pagination properties are passed, does it use the default values? As I was reading through this, it didn't really specify that. So I was wondering: if we only passed enabled with the object, but we didn't pass any of the rest, does it use the defaults, or is it expecting all of these? Because there was nothing here that said they were optional.
This object is optional, but are all these things optional? So then it just said, "Yay, excellent question! Yes, that's exactly the intent. Let me update the design to make it clearer."
So it made it clear, which is good, and it came up with the whole merging idea.
My next question was: for max retries, that number does not include the initial try, correct? For example, if max retries is 1 and the first attempt fails, there will only be one retry after that, right?
And it said "Yes. Excellent clarification! That is correct! Max retries is the number of retry attempts after the initial attempt. So max retries 1 means two total attempts, one initial, one retry. Let me update the design to make that crystal clear."
So basically it went ahead and updated the design, because again, it wasn't listed there. Just for good documentation purposes: that way, if anybody goes back to review how this was implemented, they can see it explicitly laid out. It also helps Kiro make sure that it is implemented correctly.
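The clarified semantics ("max retries counts retries after the initial attempt, so total attempts = 1 + maxRetries") can be sketched as a loop. The names here are illustrative, not the actual Cache-Data API:

```javascript
// attemptFn is any async function that performs one request attempt.
// maxRetries = 1 means at most two total attempts: one initial, one retry.
async function requestWithRetries(attemptFn, maxRetries = 1) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await attemptFn(attempt);
    } catch (err) {
      lastError = err; // try again unless we've used up all retries
    }
  }
  throw lastError; // every attempt failed; surface the last error
}
```

Pinning down this off-by-one in the design document means both the implementation and its tests agree on what "maxRetries: 1" is supposed to do.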
Ensuring Jest-Only Tests During Migration (31:18)
It asks -- oh, I asked, because I was looking through this and did a quick search of the requirements and the design, and at no point does it ever talk about creating Jest tests.
It talks about creating tests; it doesn't say Jest. And remember, I'm in the middle of migrating from Mocha to Jest. The first piece of 1.3.8 was to move everything that I was going to be touching here that's currently written in Mocha over to Jest. That was one of the first things I did before this piece. So I asked: are all tests written in Jest?
And it said, "Good question! Looking at the existing test structure, I see that the current API request tests are in Mocha. According to the AI context, the test framework is Mocha with Chai assertions. However, I see Jest tests in the repository. Let me look and check."
And then it goes on to say, "I see that there are both Mocha and Jest versions of the API request tests. Let me check."
And then it goes, "I see the repository is in a migration phase. There are both Mocha and Jest."
Now it goes on to explain, and then it wants to ask me to clarify how I want to proceed. I chose: well, just create Jest-only tests, because this particular function has already been migrated to Jest, so use Jest. But this brings up an important point.
My specs, or my steering documents, as well as the AI context, talk about Mocha and Chai. Obviously, I have not gone back and updated those documents yet to ensure that it knows it's in a migration phase.
So what I'm going to do is, I'm not going to spend time doing that. I'm just going to have Kiro do it.
Okay, so I just created a-- okay, so this is going to be interesting.
All right. So, it's waiting on this one here first, so it's not like I can just go over there. I'm going to let this complete. Once this completes, I will go back over here, and hopefully this will complete too.
Okay, so the task list has been completed, and now it's going to do this. We're doing something that's not exactly part of the spec-driven thing, and I wouldn't say it's vibe coding; it's kind of an in-between thing. It's just a little maintenance that we want.
Updating Steering Documents for Migration State (34:40)
Okay. So, it updated--
Basically, it went through noting that it's currently in migration.
And if I look at the steering documents, it updated those too, as well as the test requirements. So, I think we're good here. It now knows that it needs to do Jest. All right, so if we look at the task list: we're going to add the configuration defaults and merging logic, add a default pagination configuration object, add default retry, implement configuration merging, ensure nested retries. How many tasks do we have?
- Yep, that's about right.
Okay.
All right.
Pagination Implementation Tasks Complete (35:40)
We're going to let this work and then I'll come back.
Good morning. So, Kiro finished up all of its tasks. All the tests ran, and everything looks good after reviewing it.
Setting Up Test Applications with Atlantis & SAM (35:55)
What I'm going to do next is actually create an application that tests it.
Now I'm going to have to create two applications.
I'm going to have to create an application that has an endpoint that uses pagination, so I can test the pagination. And maybe I can create an endpoint, maybe even the same endpoint, that might occasionally throw in a connection error or some bad data that would then trigger a retry.
So I'm going to be creating two functions. One will be using Cache-Data to access an endpoint; the other application function will actually be the endpoint. To do that, I deploy applications using the Serverless Application Model (SAM), and I actually have a configuration repository for that.
I've developed a collection of templates and scripts for the Serverless Application Model that allows me to easily have various templates for various types of applications and CloudFormation stacks that I deploy to support my applications, as well as a bunch of scripts that I use to manage my SAM config files and deploy my infrastructure. So I'm going to be using the Atlantis configuration repository for serverless deployments using SAM. I already have this installed in my account.
So I can just go here.
All right.
Now, there's only one branch to this repository, the main branch. There aren't additional branches, because everything's here. The nice thing about having everything in a single repository -- now, these aren't my applications; this is my cloud infrastructure. So this is going to be scripts that I use to create repositories, scripts that I use to deploy S3 buckets or networking infrastructure, or even pipelines for applications. So this is separate from my actual application repositories.
I could create another video about how to use this at some point, but for now, just take it as: this is how I deploy applications. You can certainly explore this on the 63K Labs GitHub site and see how it works, but this is just kind of a demo, and then we'll create an application.
So, first I'm going to start up the Python virtual environment.
Took me a long time to memorize that.
Repository Creation & Starter Code Seeding (39:20)
What I'm going to do next is create a repository.
I'm going to give it a name. Let's see here.
Going to call it CD... no, cache-data.
I'm going to call it cache-data-testing-application.
All right.
Okay so I ran that command and that's just one of the scripts that I have available.
I have multiple scripts available; some of them deploy infrastructure, and you'll see all that.
So what I have here now is I want to create a repository.
It's going to ask me what-- "do you want me to put any code into it?"
So I could choose "none" like no I just want a blank repository or what I really want to do is I actually want to grab code that's already created.
Now, the neat thing about this script is that 63K Labs has code that I manage as starter code, to get you started with application development right off the bat. And you'll see in a moment what a starter application looks like.
You can also seed a repository from other GitHub repositories, which is a scary thing to do if you're just blindly accepting a repository as being good and legit. But if you own the repository, or if you're familiar with it and know that it's on the up and up, then you can seed a repository using another GitHub repository. I'm not going to do that, though. I'm going to use one of the 63K Labs ones.
Now, I have a lot of 63K Labs options here. This is the official S3 bucket that I can pull things from, and it's a bucket that anybody can pull from. But there's also a development instance, kind of like a testing instance, with a Z rather than an S. We're not going to pay attention to any of these down here; there's nothing in active development right now, so we'll ignore all of those. Your list won't be as long as this one.
So I'm going to go and do number 3 which is an API Gateway Lambda Cache-Data NodeJS function. So it's using Amazon API Gateway Lambda, Cache-Data, NodeJS.
I'm going to do number 3.
I'm the owner.
Okay. Now, the owner and the creator. I guess I didn't need to put in email addresses; I could have just put in my name.
Those are just tags. When I create repositories, it's always helpful to have tags, especially if you're a member of a large organization and you want to specify who the owner of the repository is. These are pre-established tags that I require on my repositories, and that whole thing is configurable by your organization.
My organization, myself, configured those four tags as being required and I'm not going to add any new tags.
Dev/Test/Prod Branch Workflow & Pipeline Setup (43:35)
It's going to create the repository, and it's going to do a few things when it does.
One, it creates a repository.
Two, it actually creates three branches in that repository.
Why three? Well, this uses a kind of a workflow model in which you have a dev branch which is all your daily development work. And once that is working, you know, unit test, stuff like that, test it on your local machine. You commit your code, you know, periodically to the dev branch on the remote server. Once things are ready to test, you then merge your work into a test branch. The test branch is going to be configured to automatically trigger a code deploy pipeline that will then deploy the code. So as a developer, you don't need to do any uploading of a zip or SAM commands to deploy.
All you're doing is committing your code daily to dev. When you have something that should be functional and you want to test it out on the live cloud, you merge it into test and it will automatically deploy. That'll be great.
And then, once everything checks out on test, you can merge it into main, production, and another pipeline will pick that up and put it into production. So you have two instances: a testing instance and a production instance. It's always good to follow that workflow.
There is the ability, of course, to add any number of branches and any number of pipelines. Suppose you want dev, test, beta, pre-prod, and prod; you could certainly do that, and you could add pipelines to each one of those.
So, it created the repository. Now what I'm going to do is I am going to clone that repository.
I'm not going to clone it here. This is my SAM Config, but I'm going to clone it.
Should already have Kiro up....
Cloning Repos & Opening Projects with Kiro CLI (45:44)
All right. So, I'm in my labs account directory and I'm going to clone.
So, it's cloning it.
Now, a cool thing about Kiro is you can actually do 'kiro (dot)'
Oh, I don't want to do that yet. I want to go into that repository.
All right. Now, I'm in this repository folder, but over here I'm not in that directory. So, I'm in a different directory in my CLI than I am in this here. I can fix that by just doing 'kiro (dot)'.
It opens it up in a new window, and I'm in the directory that I was in here.
VS Code actually has the same thing. So like if I wanted to go into the cache-data, I could do 'code (dot)' and it will open it up in a new window.
Very nifty, very cool. That's very nice when you're working with multiple directories. I always like to keep my directories at the root of my project, with one window per project that I'm working on. That way I don't have multiple directories showing up here, I can keep things straight, and I always know that the commands I'm running are for that directory.
So, if I go back to Kiro right now, you'll notice that I only have one README. So, let's check out or switch to dev. And now you'll see that that's where all my code is. Nothing's in main, nothing's in test. It's all in dev because I haven't made any changes yet. Let's go ahead and I'm going to actually merge my changes from dev to test.
So, I'm going to go over to test, and I'm doing this without making any changes to the code that was automatically seeded into the repository, just because I want to ensure that the pipeline works and that the application deploys. It should; it's fresh. That way, if there are any issues, I can troubleshoot them before I'm actually changing code and introducing other possible issues. So I'm going to 'git merge dev' to bring all that in, then 'git push' to test. Now all my code is in test. I don't have a pipeline yet, so I need to set that up.
Deploying the Pipeline & First Application (48:46)
Okay. So now I'm going to create the pipeline.
All right. So, what this is is basically 'config.py pipeline', because I'm creating a pipeline, with 'acme xme' (however you would want to pronounce that) as the namespace that I have permission to create in. There's a whole bunch of IAM policies behind that: in order for me, or the application, or even the CloudFormation stack to create or modify resources, there's a naming convention that's used to scope the IAM policies and permissions. That's basically what that is. I'm creating a 'test' instance, and I am using my 'labs' profile to do that.
Oh, this is why I like using one window per thing.
All right, let's try that again. Yes, although I know nothing changed, because I just did everything. So, just like you saw with the creation of the repository, I now have the ability to choose from different templates. I chose that I was creating a pipeline, so it specifically shows me the templates I have available for pipelines.
There are other templates for creating a network stack, like a CloudFront distribution and Route 53, and there's also storage for creating S3 buckets and things like that, but here are the pipelines. Now, you're not going to have this many; just like with the repository, I have access to both the public 63K Labs stuff as well as the in-development 63K Labz stuff. So, I'm going to do pipeline template number 3. And the prefix is the same as what I put in there.
I don't have that.
A lot of these are just defaults.
Repository is something I will need.
The repository name I can get from here, and the branch is 'test'. It's pulling this data from what I entered when I created the repository, which is nice: I don't need to re-enter it. And I will run the deploy script right now.
So now it's going to create a change set, and then I will execute the change set. If you've worked with AWS before, this will be familiar; it's the exact same thing. So there's my change set. I'm going to hit yes to deploy.
Okay. So it deployed the pipeline. So the pipeline is now created. Now that the pipeline is created, it is going to go check the repository for code and it will then deploy the application. So I'll let you see what that looks like.
I can go and look at the pipeline to see how it's going.
So here's the pipeline right now. It's currently in the build phase and then it's going to deploy. If I go over to CloudFormation, you'll see that it's currently working here.
So, basically there are two stacks. There's the cache-data-app test-pipeline; that's what I deployed using that script, and that's what we were waiting for when all this-- let's see here.
Too many windows! -- when all this was happening. That was the pipeline being deployed, and now we are deploying the application. So there are two separate stacks: the pipeline, then the application. The pipeline we only needed to deploy once. The application stack is automatically going to be updated every time we commit code.
Create is complete.
Awesome! Worked on the first try!
Go to the test endpoint and here we go. We have data!
All right. So what we're going to do next is modify the application to do pagination, and that means we're going to have to handle limits and all that fun stuff, which I could do by hand (I've done it), but let's not do that. So, I have Cache-Data, and I'm going to go back to the dev branch.
All right, let's go ahead and let's look at the code.
We have the template. We have an OpenAPI spec here. We have the dashboard, and we have the configuration file. I think everything here is good. What I will want to do-- oh, I don't need to add in any of this; it's already taken care of. I'm going to enable logging. I believe that's enabled.
All right.
So, how do I install it from the main branch rather than the latest release?
There we go. Ah, makes sense.
Creating the Second Application (Endpoint) (57:10)
Okay. So, what I'm going to do is I am going to create a new application. Now this is going to be the endpoint.
So I'm going to have two applications running. One is going to serve as the endpoint. One is going to be the one that consumes the data. This is the one that consumes the data. So if I go back here. I'm essentially going to do another repository and this one I'm going to name 'endpoint'.
So this is going to be the 'cache-data-testing-endpoint'. Going to go through the same process I did before.
Um I am actually just going to do the pipeline right now.
What did I name the other one?
All right. So, I'm going to deploy it.
And once this gets started, I'm going to quickly clone the repository and merge.
All right. So, that's working. Now, I'm in a race! Let's see how quickly I can do this.
Okay. What's repository name?
So again, I want to make sure I'm not in any other repository. I'm going to clone it.
And I'm going to 'cd' into it.
Right. And then I'm going to make sure that I go back.
All right, I just wanted to bring this one back to where I am here. I'm in the testing application. I'm going to go to the endpoint.
This must be the endpoint one. Okay.
All right. So now I'm pushing it.
So this is going to create an endpoint. What I will need to do, once this is deployed: currently the testing application is using this host as the endpoint, and I need to change it to point to the new endpoint that I'm creating right now.
So let's see where this is.
Okay, so it's still deploying.
Just go ahead and go to CloudFormation right now.
Okay, it's done. So now I should be able to go to the endpoint and it should be the exact same as what the previous endpoint is. It is. All right.
So now, because this is the endpoint test, this is what we're going to be using. I'm going to copy that, go over to Kiro, and open the test application.
Connecting the Two Applications (1:04:07)
So we're going to do the-- going to put the host in and we have the path.
All right. Now there's no pagination yet.
So, because these started from the same starter application, both of them are actually pulling from a third API, which we'll change. The problem is that they're pulling from another API and then transforming the data that comes across; this is enhanced demo data. What really comes across is just a list of various games. So we obviously need to change the test endpoint to not modify the data that comes back. We'll have to test that out. I'm going to go ahead and deploy this.
This is the application. I'm going to go ahead and just deploy this for right now.
Now, of course, this won't work right away. We need to change the test endpoint first.
So...
Modifying the Endpoint to Return Raw Game Data (1:06:08)
All right. So, we've got a view here.
Let's see here. Yep. So, we're returning the data in enhanced format, but we don't want to be doing that. We actually want to be returning it as is.
So, we're going to delete this.
We're going to delete this.
And delete that because we won't need that.
And the format that the games are going to come in, because of the endpoint these are getting the games from, is this format here. So this is the format that we need to mimic.
So, I'm not going to do any mapping or anything like that. All I'm going to do is take the game choices.
Okay. And then what I'm going to do is...
Yeah, this should be all I need to do.
Let's go ahead and test this out.
Because all I'm doing is still calling that endpoint, but I only want the game choices; I don't want anything else. I'm not doing the transform or anything like that.
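A minimal sketch of what that simplified view might look like. This is illustrative, not the actual repository code: 'fetchFromService' and the extra fields are stand-ins for what's shown on screen.

```javascript
// Hypothetical stand-in for the upstream service call shown in the video.
const fetchFromService = async () => ({
  gamechoices: ["Chess", "Checkers", "Backgammon"],
  hiddengames: ["Secret Game"], // extra data the consumer doesn't need
});

// Return only the game choices, with no mapping or transformation applied.
const handler = async () => {
  const resultsFromSvc = await fetchFromService();
  return { gamechoices: resultsFromSvc.gamechoices };
};

handler().then((res) => console.log(res));
```

The point is simply that the endpoint now passes the list through as-is instead of enhancing it.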
Um, but the other one is still expecting that.
But if I want to implement pagination, that's not what I want. So I'm going to change this, and then I'm going to go back and change the other one.
Actually, we're going to do this for right now.
All right. So, we'll see. We'll see what that does.
Now, what this will allow me to do is make incremental changes. What I really want to do is go to the service... For the testing endpoint, I don't need to be calling that external service. Instead, I want it to generate a whole bunch of data.
So we're no longer calling the service.
Instead, we're just going to be... doing that.
It doesn't really need to be in a try block, but...
I think the goal for today is just to get these two applications talking to each other.
We won't need this anymore.
Well, I'm going to be modifying this, so I'm going to keep the catch.
Looks like we don't need this anymore either.
All right. The nice thing about not having the dev branch hooked up to a pipeline is that you can just keep committing to it without deploying anything.
All right.
Pipeline... It's currently deploying.
So yeah, if I can get these two applications to talk to each other, that'll be good, because then I'll have the applications set up and I can go through and modify what the one application expects and implement pagination.
In fact, I might just be able to do that now.
So, the application is expecting... to transform. So, this is the application.
It's getting game choices. So, when I update the other one, I'm going to remove that because I'm just going to be returning... an array.
Let me rethink this.
I basically want this application to return an end result. And I'll keep game choices.
That'll be my 'items'.
Yeah.
Yeah, that'll be fine.
So this one is going to be done for right now. The other one is the one that I'm going to update pagination on.
All right, let's go and check-- check things out.
It deployed. This one is the endpoint test.
I must not have deployed that yet. I thought I did. Maybe it's still deploying.
Deployment Failure from Broken Tests (1:17:34)
Oh, it failed.
Why did it fail?
Oh, it was probably running a test.
Yep, it-- it's running tests.
So, I tried modifying it, but there are tests, and of course the tests failed because I modified it and it's no longer returning what it should.
So, I'll have to fix that.
So, not the best start, but it is a start.
So, what have I accomplished?
Probably very little.
This is why thinking things through is very, very helpful.
I'm going to have to come back with a different game plan because I need to make sure that the endpoint produces paginated results and that the application can consume paginated results.
One thing I can do for the testing endpoint is disable the tests.
Now, I'm doing two installs here. I'm just going to leave it for right now. Okay.
All right. So that should deploy it.
Iterating on the Endpoint Response Format (1:20:50)
So, I'm going to spend some time on this. Here I have game choices; that's good. I'm going to remove the hidden games.
Um, what I'm going to want to do is... Okay.
Right.
Wonder if that'll work.
Again, this is not production, so it's no big deal... until it is a big deal.
So, I executed two changes.
Okay. So, the first one went through just fine. Let's go ahead and just test that out.
It's exactly what I wanted. So, it looks exactly like the original except it has 'items'.
All right, let's change that. It's going to have 'items' then 'gamechoices'.
That's because it's doing this.
Okay. So right now this is returning game choices nested within 'items', and I don't want that. I want the top level to be game choices instead of items. Why? Because by default, pagination as it's implemented looks at 'items', but when I go to paginate from this API, I want to be able to give it a custom items name, which is going to be 'gamechoices'. So it's going to be a list of choices here.
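The response shape being described could be sketched like this. The 'count' and 'gamechoices' keys follow what's shown in the video; 'buildPage' and its 'skip'/'take' parameters are hypothetical helpers for illustration.

```javascript
// Hypothetical paginated response builder: the list lives under a
// custom top-level key ("gamechoices") rather than the default "items".
const buildPage = (allGames, skip = 0, take = 20) => ({
  count: allGames.length, // total items across all pages
  gamechoices: allGames.slice(skip, skip + take), // this page's slice
});

// Example: 50 games served 20 at a time.
const games = Array.from({ length: 50 }, (_, i) => `Game ${i + 1}`);
console.log(buildPage(games, 0, 20).gamechoices.length); // 20
```

A consumer that supports a configurable items key can then be pointed at 'gamechoices' instead of 'items'.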
And what I will do in the future, after I get this deployed-- so there should be a change already. It should just be the game choices. It is just the game choices, so that's looking good; that's exactly what I wanted. I'll figure out a longer list of all these game names later. But yes, I want to move this up here. So that was my next change.
I'm getting there.
Kiro Code Completion for Generating Game Names (1:26:24)
Once that happens, then the other stack should be pulling things perfectly.
And then it will be a win for the day.
So here, when I asked it to add 50 more random games to the results of game choices, the code completion was pretty good. But I haven't really done this in Kiro before in terms of code completion; I typically use Amazon Q for that. I'd be curious to see if it can do random game names.
It can. Look at that! So, this is actually more interesting than a whole bunch of numbers.
So, we've got Space Invaders... definitely speaking to my heart. So now I'm going to want to add those.
There you go.
All right.
So, this is what we're going to do. We've got the game names.
Yeah. I mean, I guess I could have just done those, but hey, why not?
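For reference, generating a batch of random game names could look something like this. The name list and the 'randomGames' helper are illustrative guesses, not what Kiro actually generated.

```javascript
// Illustrative pool of classic game names (stand-in for the generated list).
const classics = [
  "Space Invaders", "Pac-Man", "Asteroids", "Galaga",
  "Centipede", "Donkey Kong", "Frogger", "Tetris",
];

// Draw n names at random from the pool (repeats are possible).
const randomGames = (n) =>
  Array.from({ length: n }, () =>
    classics[Math.floor(Math.random() * classics.length)]);

console.log(randomGames(5));
```

Calling 'randomGames(50)' would seed the endpoint with 50 entries to paginate over.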
Again, I had not used Kiro for code completion before; I've always done spec-driven development with it and only used Q for completion. It looks like we are good here. Let's just see what our status is over here. That looks good.
So, we got 'gamechoices' 'gamechoices'.
Okay. So before we deploy this go to the view again.
We're the end point.
It's not even supposed to be what it is.
Okay, so basically we've got 'resultsFromSvc.gamechoices', which is what we need. We're going to pass that to 'gamechoices', and we're going to do that here. I could just do that, but I'm using game choices in both places here, so that should work. Let's see how that goes. But my other question is: why do we not have length here?
Well, let's just see what happens.
Both Applications Talking Successfully (1:31:26)
Okay. And we have my changes.
All right. So, we have this pretty much where I want.
We've got the count of 20. We've got the game choices. I might add a few other functions here to make it look more like it's really producing paginated responses.
Now let's check out this endpoint and hope that it can consume that.
It does! Success!
All right. So what we have is two endpoints. One endpoint is producing a list of up to 20, and we're going to paginate that: we're going to add some query parameters such as 'take', 'skip', and 'limit', however we want to code that. It's producing a random list, and that endpoint is being consumed by the other endpoint, which then transforms that data.
But what we want in the end is not a list of 20; we want a list of 50 games. So for 50 games, taking 20 at a time, it's going to be making three total requests. It makes the initial request, which lets it know how many total games (or pages) there are. Since it's getting 20 at a time, it knows it needs to make two more requests, and it's going to do those concurrently. If Kiro implemented everything we asked it to in Cache-Data correctly, it should do pagination, and we should be able to see that in AWS X-Ray. Then we're going to throw in some chaos: we'll randomly kill a connection or return incomplete results, which will force a retry, and that will test the retry logic.
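The consumption pattern described above (first request reveals the total, remaining pages fetched concurrently, failures retried) can be sketched roughly like this. 'fetchPage' and 'withRetry' are stand-ins for illustration, not Cache-Data's actual API.

```javascript
// Stand-in data source: 50 games served 20 at a time.
const TOTAL = 50, PAGE = 20;
const allGames = Array.from({ length: TOTAL }, (_, i) => `Game ${i + 1}`);

// Hypothetical paginated fetch (in reality, an HTTP call with ?skip=...).
const fetchPage = async (skip) => ({
  count: TOTAL,
  gamechoices: allGames.slice(skip, skip + PAGE),
});

// Simple retry wrapper: re-attempt on failure, rethrow after the last try.
const withRetry = async (fn, attempts = 3) => {
  for (let i = 0; i < attempts; i++) {
    try { return await fn(); } catch (e) { if (i === attempts - 1) throw e; }
  }
};

// First request reveals the total count; remaining pages go out concurrently.
const fetchAll = async () => {
  const first = await withRetry(() => fetchPage(0));
  const skips = [];
  for (let s = PAGE; s < first.count; s += PAGE) skips.push(s);
  const rest = await Promise.all(
    skips.map((s) => withRetry(() => fetchPage(s))));
  return rest.reduce((acc, p) => acc.concat(p.gamechoices), first.gamechoices);
};

fetchAll().then((games) => console.log(games.length)); // 50
```

For 50 items at 20 per page, that works out to the three requests mentioned: one initial, then two concurrent follow-ups. Injecting a random failure into 'fetchPage' is how the chaos testing would exercise 'withRetry'.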
So that's what we're going to do next time.
Next Steps: Pagination Testing, Chaos/Retry Testing & Packaging (1:34:43)
Everything's working right now. We basically have our test structure somewhat set up; at least the two applications are running and talking to each other, and we've got a good starting point.
So, all my documents are in GitHub. If you want to see the prompts I was using, the specs, the steering documents, all that stuff, it's all in the "63klabs/cache-data" repository on GitHub.
When you go there, you can check all that out and see how I've been using it and my workflow, and I'll continue on in the next video. We'll actually start using either Kiro or Q, depending on the project, so I can compare the two and see how they work. I'd also like to test out predictive coding a bit more, as you already saw me do with the games area, because that's something I haven't done much in Kiro yet, and I don't know how it compares with Q. I want to do maybe a little bit of vibe coding. And then, to finish up, I want to package the two test applications so that I can use Kiro to rapidly add new features for testing what I'm doing with Cache-Data. That way I have a proper framework to test Cache-Data on.
So I'll continue on with that in the next video.
Outro (1:36:51)
So, that's pretty much all I've got to say. I hope you enjoyed the video. As they say: go build, and thanks for watching.