This is the second part of last week's article.
This time we are going to process a 30-million-line remote file on the fly, summing the number on each line. We will see how to implement the functions we need to convert a stream of chunks (the one we built in part 1) into a stream of lines. We'll then benchmark two different implementations.
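As a rough preview of the chunk-to-line conversion, here is a minimal sketch based on `Stream.transform/3`. The module and function names are illustrative, `chunks` stands in for the chunk stream built in part 1, and it assumes the file ends with a newline (otherwise the final partial line held in the accumulator would be dropped):

```elixir
defmodule Lines do
  # Hypothetical sketch: turn a stream of binary chunks into a stream of lines.
  # A line may be split across two chunks, so we keep the trailing partial
  # line as the accumulator and prepend it to the next chunk.
  def stream(chunks) do
    Stream.transform(chunks, "", fn chunk, acc ->
      # Prepend the leftover from the previous chunk, then split on newlines.
      [last | complete] =
        (acc <> chunk)
        |> String.split("\n")
        |> Enum.reverse()

      # Emit the complete lines in order; carry the partial line forward.
      {Enum.reverse(complete), last}
    end)
  end
end
```

Summing the numbers then becomes a short, lazy pipeline on top of it:

```elixir
chunks
|> Lines.stream()
|> Stream.reject(&(&1 == ""))
|> Stream.map(&String.to_integer/1)
|> Enum.sum()
```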
Finally, we'll see how easy and quick it is to process just the first 30 lines of the same 125 MB file.
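The reason this is cheap is stream laziness: `Stream.take/2` stops pulling from upstream once 30 lines have been emitted, so only the first chunks of the file are ever requested. A small self-contained sketch, where `lines` is a stand-in for the lazy line stream and a large range simulates the huge source:

```elixir
# `lines` simulates a 30-million-line lazy source; nothing is computed yet.
lines = Stream.map(1..30_000_000, &Integer.to_string/1)

# Only the first 30 elements are ever produced.
first_30 =
  lines
  |> Stream.take(30)
  |> Enum.to_list()
```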