You saw no improvement essentially because you are not using Dask for the right use case. Dask, like Spark, is designed for data that does not fit in main memory: it translates your ML code into a computation graph that can be distributed across several cores or machines, much in the flavor of TensorFlow.
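To make the computation-graph idea concrete, here is a conceptual sketch in pure Python (no Dask dependency, so the names `Delayed` and `delayed` below are illustrative, not Dask's actual implementation). Like Dask's `dask.delayed`, calling a wrapped function builds a graph node instead of running it; the work only happens when you ask for the result.

```python
class Delayed:
    """A node in a computation graph: a function plus its (possibly delayed) args."""
    def __init__(self, func, *args):
        self.func = func
        self.args = args

    def compute(self):
        # Recursively evaluate dependencies first, then apply this node's function.
        resolved = [a.compute() if isinstance(a, Delayed) else a
                    for a in self.args]
        return self.func(*resolved)


def delayed(func):
    # Wrap a function so calling it records a graph node instead of executing.
    def wrapper(*args):
        return Delayed(func, *args)
    return wrapper


@delayed
def inc(x):
    return x + 1


@delayed
def add(a, b):
    return a + b


graph = add(inc(1), inc(2))  # builds the graph; nothing has run yet
print(graph.compute())       # 5
```

In real Dask, the scheduler walks such a graph and dispatches independent nodes to different threads, processes, or machines; that scheduling overhead only pays off when the data or the work is too big for a single core, which is why small in-memory sklearn workloads see no speedup.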

While sklearn has improved over the years, Dask remains useful for massive data tasks that can be expressed as a computation graph.

Written by

Managing Director @ amethix.com Chief Software Engineer & Host @datascienceathome.com
