LocalStack provides an easy-to-use test/mocking framework for developing Cloud applications. At this stage, its focus is primarily on supporting the AWS cloud stack.
LocalStack spins up various Cloud APIs on your local machine, including S3, Lambda, DynamoDB and API Gateway. All you need to do is spin up the LocalStack docker container, deploy your infra, say a DynamoDB table or a lambda function, within LocalStack, and connect to these services running on your local machine from within your code.
Me> Interesting. Does LocalStack support all AWS services?
Hernandez> No, it supports quite a few but definitely not all.
I am sure unit testing of AWS Lambda function code is understood by all of us, but what is good to know is that LocalStack can be used for integration testing.
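As an illustration (not something the panel shared), here is a minimal TypeScript sketch of connecting to a service running inside LocalStack; the edge port 4566 and the dummy credentials are assumptions based on LocalStack's defaults.

```typescript
// A minimal sketch: pointing an AWS SDK (v2) client at LocalStack running locally.
// The edge port 4566 and the dummy credentials are assumptions based on LocalStack defaults.
import { DynamoDB } from 'aws-sdk';

const dynamoDb = new DynamoDB({
  region: 'us-east-1',
  endpoint: 'http://localhost:4566', // LocalStack edge endpoint on the local machine
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' }, // any dummy values work locally
});

// The same client code talks to real AWS once the endpoint override is dropped.
dynamoDb
  .listTables()
  .promise()
  .then(result => console.log(result.TableNames));
```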
. . .
Me> Jessica, you talked about unzipped code. Does that mean you have to create a zip file and upload it somewhere?
Jessica> Well, you have to package your lambda function along with its dependencies as an archive and upload it either on the AWS Lambda console or to an S3 bucket, which will then be referenced from your CloudFormation template.
Me> How do you folks package your application? It appears to me as if we need to create a “fat jar” kind of a thing.
Hernandez> We use TypeScript for coding our lambda application and webpack for packaging it. It does not create a zip file, just an out directory containing the transpiled code (js) and a handler.js file with all the required code from the different node_modules, plus its source map.
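(For readers curious what such a webpack setup might look like, here is a minimal sketch; the file names, paths and loader choice are assumptions rather than the panel's actual configuration.)

```typescript
// webpack.config.ts -- a minimal sketch for bundling a TypeScript lambda handler
// into a single out/ directory (entry point and paths are assumptions).
import * as path from 'path';
import { Configuration } from 'webpack';

const config: Configuration = {
  target: 'node',                        // bundle for the Node.js lambda runtime
  mode: 'production',
  entry: './src/handler.ts',             // assumed entry point of the lambda function
  module: {
    rules: [{ test: /\.ts$/, loader: 'ts-loader' }],
  },
  resolve: { extensions: ['.ts', '.js'] },
  devtool: 'source-map',                 // emit the source map mentioned above
  output: {
    path: path.resolve(__dirname, 'out'),
    filename: 'handler.js',              // single file with the required node_modules code pulled in
    libraryTarget: 'commonjs2',          // export the handler the way the lambda runtime expects
  },
};

export default config;
```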
Me> How do you deploy your code then? You just seem to have created an output directory with a few JavaScript files.
Hernandez> We use CDK for deploying our code, which also allows you to code your infra.
Me> Wow, the list of tools doesn’t seem to come to an end.
Hernandez> It’s simple. Just look at it this way: we have created a directory which is ready to be deployed, and the moment you say cdk bootstrap, it will copy the contents of this out directory into another directory, which will be archived and uploaded to an S3 bucket.
And when you say cdk deploy, you will see all the required AWS components getting deployed. Simple.
Me> Simple? You said the contents of this out directory will be copied into another directory. Does that mean CDK already knows about the out directory?
Hernandez> That’s true. When you code your infra, you specify where your compiled (or transpiled), ready-to-ship code is located, and that’s how CDK knows about this directory.
Me> Great, now I am able to connect the dots. Build your code -> get a shippable directory -> archive it -> upload it to an S3 bucket -> deploy it, and CDK is one way to get all these steps done. Is that right?
Hernandez> Absolutely.
In order to deploy your lambda function, it needs to be packaged along with its dependencies as an archive. You could use webpack if you are using TypeScript as your programming language, and CDK, CloudFormation or SAM for packaging and deploying the lambda function.
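To make the CDK step concrete, here is a minimal sketch (stack, construct and directory names are assumptions) of telling CDK where the shippable directory lives; cdk bootstrap and cdk deploy then take care of archiving, uploading and deploying it.

```typescript
// A minimal CDK (v2) sketch: the lambda function is created from the local
// "out" directory produced by the build. Names are assumptions.
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class OrdersStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new lambda.Function(this, 'OrdersFunction', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'handler.handler',         // file "handler.js", exported function "handler"
      code: lambda.Code.fromAsset('out'), // the shippable directory produced by the build
    });
  }
}

const app = new App();
new OrdersStack(app, 'OrdersStack');
```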
. . .
Me> Jessica, Hernandez, what are the different types of applications that you folks have built using AWS Lambda?
Jessica> We have built serverless microservices using AWS Lambda, and we also process web clicks on our application, which is a stream of events flowing from the user interface to AWS Pinpoint to AWS Kinesis to AWS Lambda.
Hernandez> We use AWS Lambda for scaling down images that are uploaded to our S3 buckets and for processing DynamoDB streams, which are streams of changes in a DynamoDB table.
Me> Thanks Jessica and Hernandez.
Our panel highlighted the different types of applications they have built using AWS Lambda, including microservices, event processing (images uploaded to S3 buckets) and stream processing (web clicks and changes in DynamoDB).
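As an illustration of the event-processing case (not the panel's actual code), here is a minimal sketch of an S3-triggered lambda handler; the image-scaling logic itself is omitted.

```typescript
// A minimal sketch of a lambda handler triggered by S3 object uploads.
// Only the event handling is shown; the actual image-scaling work is left out.
import { S3Event } from 'aws-lambda';

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Download the object, scale the image down, and write the result back here.
    console.log(`received object ${key} from bucket ${bucket}`);
  }
};
```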
. . .
With this we come to the end of our “Virtual Podcast”, and a big thank you to Jessica and Hernandez for being a part of it. This was wonderful, and I hope our readers (yes, it is still virtual) find it the same way. Thank you again.
I guess we are ready to do TDD for Serverless as well.
Finally, we have come to the end of our first article, where we made an attempt to design a small part of a serverless application that uses AWS Lambda, API Gateway and DynamoDB.
As a part of this application we have tried to draw some parallels with the MVC design pattern and bring the same to the serverless world.
Items that we have left:
I am sure you will be able to fill these gaps, and at this stage I will move forward.
There is a lot of work still left before we can deploy the code:
The code is available here.
Let’s move on to our next article, which explores integration testing using LocalStack for our serverless application.
Bitcask has a relatively simple data structure compared to an LSM tree, and it offers some great positives:
The Bitcask model also has a set of challenges:
The code for this article is available here.
Thank you, Debasish Ghosh for reviewing the article and providing feedback.
Probabilistic data structures provide approximate answers to queries about a large dataset rather than exact answers. These data structures are designed to handle large amounts of data in real-time by making trade-offs between accuracy and time and space efficiency. ↩︎
LSM Tree: A log-structured merge tree (LSM tree) is a data structure typically used when dealing with write-heavy workloads. The write path is optimized by performing sequential writes. ↩︎
If a team believes there is delivery pressure today and tests can be added tomorrow, then the team needs to be sure of one thing - “That tomorrow is never coming”.
I would like to thank Gurpreet Luthra, Unmesh Joshi and Sunit Parekh for providing feedback on the article. Thank you, Gurpreet, Unmesh and Sunit.
These commands make a few assumptions:
The dist/ directory, which will be deployed to an S3 bucket (that bootstrap creates for us) when we execute cdk bootstrap, already exists.
It will take some time for the stack to be created; it will consist of the lambda function, the DynamoDB table, the API gateway and all the necessary IAM roles.
Once our stack is created, make an entry in the orders table, hit the public API endpoint, which will look like https://rest-api-id.execute-api.ap-south-1.amazonaws.com/dev/orders/OrderId, and enjoy the output.
That’s it, our stack is deployed and our application is up and running 😁
The relationship between CDK and CloudFormation can be summarised as follows:
In this article we were able to code our infra using CDK, write tests for it and deploy it. Let’s take a look at some advantages of using CDK:
We have finally come to the end of our Serverless Journey series. Hope you enjoyed it.
Thank you, Debasish Ghosh for reviewing the article and providing feedback.
Probabilistic data structures provide approximate answers to queries about a large dataset rather than exact answers. These data structures are designed to handle large amounts of data in real-time by making trade-offs between accuracy and time and space efficiency. ↩︎
Big O notation: In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. ↩︎
The result for cache-hits in CacheD is available here.
That’s it. We have all the building blocks needed to build an in-memory LFU cache.
The source code of CacheD is available here and the crate is available here.
We push the number variable and the this reference on the stack. We need the this reference to be able to get the value of the instance variable n.
We load the number variable from slot 2 and the this reference from slot 0 onto the stack.
We load n, sum and the number variable to be able to perform the addition (sum = sum + number).
The following diagram represents the overall execution of the sum method.
Let’s conclude with some key takeaways:
Hope it was meaningful. Appreciate the feedback.
We believe the MVP is done and features like flipping at runtime and supporting database-driven feature flips are in the pipeline.
For any custom flip condition, you could use @FlipOnSpringExpression with your custom Spring bean to determine the flip condition.
If you want to have a look at the code or even want to contribute, you can check out Flips.
Feel free to share any feedback.
We often have a lot to share with people: our learnings, our opinions and our experiences. There are times when we feel the need to get our ideas validated or to get feedback from people. These are definitely some of the reasons to invest in blogging and connect with the community.
Let’s see some reasons for investing in writing blogs.
There are times when we feel “it would have been great if someone had written an article to explain this concept”. Start writing if you have ever had this feeling.
Learning is like climbing a rock. While climbing, we always look at the tip of the rock just to realize it is too far away. What is also important is to realize that there are people who might have just started this journey and your “learning journey” could go a long way in helping them.
Share your mistakes with the community. We as a community learn from each other’s mistakes and experiences and these things are really valuable.
Your blog on “Failing with Microservices” could help me in avoiding some mistakes or at least rethink my design if I am starting with microservices.
There are a lot of things which help us grow as individuals, and one of them is feedback, or I should say “constructive feedback”. Write to get feedback from the community, to get their thoughts, to hear their experiences and to learn from all of these. Let’s see how this could work.
Say, I am very excited to use Coroutines to build reactive streams in my next project and I share an article “Being Reactive with Kotlin Coroutines” which talks about the basics of Coroutines and abstractions like “Channel” to implement reactive streams.
This article receives a lot of feedback, and one piece of feedback says:
Hey, nicely put. I would suggest you check this link. It says: “There is no way to receive the elements from a channel again. The channel is closed when the producer coroutine is over and the attempt to receive from it again cannot receive anything.” You might also want to take a look at kotlinx-coroutines-rx2.
Now, this is important as it helps me understand a lot of dimensions, including backpressure and hot and cold observables, which I had not considered. Thanks to the feedback, I got pointed in the right direction.
. . .
Investment is tricky, and one expects a return from every investment. Let’s see the overall “return on investment” in blogging.
Like we learn when we teach people, we also learn when we share our ideas with people. Blogging helps in solidifying our understanding, and the reason I say this is:
We try to communicate our ideas in the simplest possible manner to our readers. In order to do this, we choose to take small steps, and each of these steps is well thought out and analyzed. Each step in turn teaches us something which improves our understanding.
We were talking about DSLs in Kotlin in one of the workshops, and I happened to like the way that topic was built - from lambdas to extension function to lambdas with receiver to invoke function.
I decided to share the same in an article Kotlin DSLs: The Basics and if I look back, I realize these two things - a workshop and an article have really helped me understand Kotlin DSLs well.
You are not afraid of reaching out to people, and you are not the same person you used to be who would think “should I share this, people would already know it”, “this tech was released 5 years back and I am writing about it now, does it make sense?”. You become someone who would share his/her ideas with confidence.
You challenge yourself to write better every time. You tend to experiment with different styles of writing in an attempt to communicate your ideas clearly and connect with people better.
You tend to wear a writer’s hat every time you sit down to share something. An attempt is made to talk to the readers through your article, which acts like a story. You read your article hundreds of times in an attempt to articulate better. All this makes you a better articulator of thoughts.
Investment in blogging is a great way to build a network: you get to know people and people get to know you.
Networking is very powerful and truly magical; it can surprise you with opportunities you might not have imagined. You might get to speak at conferences, work with people you follow, and much more.
Investment in blogging is really a simple way to build a network!
Investment in blogging acts as a great tool to build your and your organization’s brand.
The “return on investment” in blogging looks promising, but we need to be aware that the return might not be immediate.
Making an investment is the first step and usually the most difficult one; the rest is all about the return :)
Take your first step with blogging and share your ideas, opinions and thoughts with the community. It is a great tool which does a lot of magic, has a great return and, more importantly, “it is fun”.
Invest in blogging.
That is it: run the test and get all the values from the linked list.
The code for this article is available here.
The operator function invoke() can come in handy while building a DSL.
Use invoke along with a lambda with receiver as a function parameter.
Immutable Classes: It would be good to see something like a readonly or immutable modifier to create an immutable class. The below-mentioned code snippet is simply a thought (not available in Kotlin or Java).
//Hypothetical
immutable class User(private val name: String, private val id: Int)
As developers, we will always make mistakes (skipping NULL checks, mutating a collection, etc.), but providing features at the language level that can stop these mistakes will make our lives easier.
I cheated again. I did a lot more than I should have done: renamed methods to be text*, duplicated the for loops (over rentals) to calculate the total amount, and repeated the same in the textBody() method.
Is that justified? Well, how many rentals do we expect a customer to have? What is the cost of iterating over them twice? If it is not significant, go ahead and use it. What does it give me? Look at the statement() (renamed to textStatement()) method now.
Jessica> Now, we are done with refactoring. We can introduce the HTML statement functionality now.
Jessica and Scott went on to implement the HTML functionality (with tests) and they did a lot to clean up the existing code. The code is much more understandable than it used to be.
They might not have cleaned up everything, but they have left a clear trail of understanding for others to follow.
They followed Cover and Modify, the Boy Scout rule and the refactoring cycle, and refactored just enough to finish the new functionality; in short, they dealt with legacy code professionally.
Hold on, did we just copy the output generated by the code and paste it into our test? Yes, that is exactly what we did.
We aren’t trying to find bugs right now. We are trying to put in a mechanism to find bugs later, bugs that show up as differences from the system’s current behavior. When we adopt this perspective, our view of tests is different: They don’t have any moral authority; they just sit there documenting what the system really does. At this stage, it’s very important to have the knowledge of what the system actually does.
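To make this concrete, here is a minimal characterization-test sketch in TypeScript with Jest; it is a hypothetical example, not the code from this series.

```typescript
// Imagine this is legacy code whose exact output nobody remembers precisely.
function legacyGreeting(name: string): string {
  return 'HELLO,  ' + name.toUpperCase() + ' !';
}

test('characterizes the current greeting output', () => {
  // The expected value below was obtained by running the function once and
  // copying whatever it produced. The test documents current behaviour;
  // it does not judge whether that behaviour is right.
  expect(legacyGreeting('Jessica')).toEqual('HELLO,  JESSICA !');
});
```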
Question> What is the total number of tests that we write to characterize a system?
Answer> It’s infinite. We could dedicate a good portion of our lives to writing test case after test case for any class in a legacy codebase.
Question> When do we stop then? Is there any way of knowing which cases are more important than others?
Answer> Look at the code we are characterizing. The code itself can give us ideas about what it does, and if we have questions, tests are an ideal way of asking them. At that point, write a test or tests that cover a good enough portion of the code.
Question> Does that cover everything in the code?
Answer> It might not. But then we do the next step. We think about the changes that we want to make in the code and try to figure out whether the tests that we have will sense the problems that can happen. If they won’t, we add more tests until we feel confident that they will.
There is so much to refactor in legacy code, and we cannot refactor everything. To answer this, we need to go back to our purpose in refactoring legacy code: we want to leave it cleaner than it was when it came to us and to make it understandable for others. With that said, we want to make the system better while keeping the focus on the task at hand. We don’t want to go crazy with refactoring, trying to improve the whole system in a few days. What we want to do is refactor the code that comes in the way of implementing any new change. We will try and understand this better with an example in the next article.
Let’s move on to the next article, which explains how to deal with legacy code.
LocalStack exposes an environment variable, LOCALSTACK_HOSTNAME, which is available inside the docker process and refers to the host machine.
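Here is a minimal sketch (the port, table name and key name are assumptions) of building a service endpoint from this variable so that code running inside LocalStack's lambda container can reach other LocalStack services.

```typescript
// Build the DynamoDB endpoint from LOCALSTACK_HOSTNAME when running inside
// LocalStack, falling back to localhost for tests running directly on the host.
import { DynamoDB } from 'aws-sdk';

const localstackHost = process.env.LOCALSTACK_HOSTNAME ?? 'localhost';

const dynamoDb = new DynamoDB.DocumentClient({
  region: 'us-east-1',
  endpoint: `http://${localstackHost}:4566`, // assumed LocalStack edge port
});

// Table and key names are assumptions for illustration.
export const fetchOrder = (orderId: string) =>
  dynamoDb.get({ TableName: 'orders', Key: { orderId } }).promise();
```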
That’s it. Run all the tests with npm t and see them pass 😁
We used LocalStack to test our application. Everything is available here.
Here is a quick glimpse of the sequence of events that happen when the integration tests are executed.
Let’s move on to our last article and see everything in action on an AWS account.
MergerIterator is available here.
We have finally reached here :).
LSM-tree based storage engines typically include the following data structures:
LSM-trees offer higher write throughput because the writes are always sequential in nature, but reads are not so great because LSM-trees may have to scan multiple files or portions of multiple files. One idea to improve the reads in LSM-trees is to reduce the size of SSTable files and cache some layers of SSTables.
When designing a storage engine for SSDs, we should consider SSD characteristics including:
The core ideas of WiscKey include: separating values from keys in the LSM-tree to reduce write amplification and leveraging the parallel IO characteristic of SSD.
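To make the key/value separation idea concrete, here is a tiny in-memory sketch in TypeScript; it only illustrates the idea and is in no way WiscKey's actual implementation.

```typescript
// Key/value separation in miniature: the key index (standing in for the
// LSM-tree) holds only small pointers, while values are appended sequentially
// to a value log (standing in for the on-disk vLog).
interface ValuePointer {
  offset: number; // where the value starts in the value log
  length: number; // how many bytes the value occupies
}

class KeyValueSeparatedStore {
  private valueLog = Buffer.alloc(0);              // stand-in for the value log file
  private index = new Map<string, ValuePointer>(); // stand-in for the LSM-tree of keys

  put(key: string, value: string): void {
    const bytes = Buffer.from(value);
    const pointer = { offset: this.valueLog.length, length: bytes.length };
    this.valueLog = Buffer.concat([this.valueLog, bytes]); // sequential append
    this.index.set(key, pointer);                          // only a small entry per key
  }

  get(key: string): string | undefined {
    const pointer = this.index.get(key);
    if (!pointer) return undefined;
    return this.valueLog
      .subarray(pointer.offset, pointer.offset + pointer.length)
      .toString();
  }
}
```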
I hope the article was worth your time. Feel free to share the feedback.
Thank you, Unmesh Joshi, for reviewing the article and providing feedback.
The feature sendEmail is enabled if the property feature.send.email is set to true.
I have a separate blog post about this library.