
Earlier we were using Jupyter notebooks to create data models, but that required a lot of memory and CPU for large data, and we were doing everything manually. After adopting Lambda, we now do all of this through a pipeline: once the code is pushed to GitHub, it is auto-deployed to Lambda, and after processing the data from S3 we create the data model with custom memory and timeout settings.
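As a rough illustration of the kind of handler this workflow implies (the review doesn't share code, so this is a minimal sketch; the bucket, key, and handler names are hypothetical, and memory/timeout are set in the function's configuration rather than in code):

    import boto3

    # boto3 is available in the Lambda Python runtime by default.
    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        # Hypothetical bucket and key; substitute the real data location.
        obj = s3.get_object(Bucket="example-data-bucket", Key="raw/input.csv")
        raw = obj["Body"].read().decode("utf-8")

        # ... build the data model from the raw data here ...

        # Return a small summary so the pipeline can verify the run.
        return {"rows_processed": raw.count("\n")}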
It has a memory constraint; we can only go up to a few GB at most. Otherwise, it supports many languages, which is good. But it currently doesn't support my older code, like Python 2.7, so I'm editing that code to work with Lambda as well :-)




