Practicing ML in a Non-ML Job

Thoughts on learning and exercising ML skills.
ml-practice, jobs

Published: May 27, 2022

Many people who aspire to become Machine Learning (ML) practitioners find it particularly difficult to keep honing relevant skills while working in a job that does not involve even a tiny bit of ML. So, if you’re serious about ML as a potential career option, it’s important to keep practicing what you’re learning along the way. Otherwise, there will likely be little evidence for a recruiter to trust in your candidacy, which, in turn, minimizes your chances of landing the ML job you always wanted.

I myself am no exception to this. Back in 2017, when I was working at Tata Consultancy Services Limited (TCS), I didn’t get any assignments that involved ML. But I tried to use my off-work hours in a way that helped me improve my ML-specific knowledge and strengthen my candidacy.

So, in this post, I’ll share what I did during those days, in the hope that it provides some meaningful ways to navigate a similar situation.

Disclaimer: The opinions stated in this post are solely mine and are not meant to demean anyone else’s opinions on the same topic.

Assumptions

The post is best suited for professionals who have prior experience with coding (preferably in Python) and know their way around the fundamentals of ML. If you’re an absolute beginner, I recommend picking up a book (an example) or a course (an example) to get started. Also, if you haven’t yet picked up an ML framework (Scikit-Learn, PyTorch, TensorFlow, JAX, etc.), I highly recommend picking one up.
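
To make "picking one up" concrete, here's a minimal end-to-end sketch using Scikit-Learn on one of its built-in toy datasets. The dataset and model choices are purely illustrative, not a recommendation:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a built-in toy dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a simple baseline model and evaluate it on the held-out split.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```

Once a loop like this feels routine, swapping in a different framework or dataset becomes much less intimidating.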

Solace in Uncertainty

Set your objectives straight. Ask yourself if you’re absolutely certain about wanting to pursue a career in ML. If so, are you willing to make the adjustments necessary to attain it at any cost? Although these questions are not specific to the purpose of this post, they help establish a mindset that can push you through uncertain times.

I was hell-bent on taking up a career in ML, and that determination helped me work on myself in those extra hours after work. It didn’t feel like I was being forced into it; I thoroughly enjoyed the process and I trusted it. There are things I still enjoy doing: reading a new paper, learning about a new concept, implementing it, and so on.

Overwhelm and Courage to Learn

Feeling overwhelmed is common, especially in the ML domain, given how vast the field is and how rapidly it evolves. I see this positively: it reminds me that there are things I don’t yet know, and I treat that as an opportunity to improve my knowledge.

One might wonder: do I learn each and every thing that comes out? That’s impossible and likely not very helpful either. So, I like to pick something from the vault of things that genuinely interest me in ML and start digging deeper. I find this incredibly helpful for boosting my confidence. I also found that the more I did this, the better I was able to develop a general understanding of a broad range of relevant things.

In a nutshell, treating the feeling of being overwhelmed as a learning opportunity has been quite helpful for me.

Learn, Apply, (Demo), Repeat

Learning is not enough. You need to develop evidence that shows you can successfully apply what you’ve learned. I highly recommend reading this interview with Emil Wallner, an “internet-taught” ML researcher working as a resident at Google.

Below, I discuss a few things you can do to exercise your ML learnings.

Kaggle

Kaggle is arguably one of the best platforms for developing data preprocessing skills and for applying ML in creative ways to solve unique problems. So, pick an interesting dataset or a competition purely for learning purposes. Setting the competitive mindset aside, Kaggle really helped me during my initial years to develop a habit of continuous learning in service of self-improvement. If you commit to it seriously, you will develop a bunch of useful skills, and over time, you’ll definitely get better.

Keeping an open mind toward learning is important here, because expectations about outcomes can quickly derail you. Also, remember that the rate of improvement is not the same for everyone. So, it’s better to focus on things that are within your control (for example, learning something) and consistently get better at those.
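
To give a flavor of what that practice looks like, here's a minimal sketch of a typical first pass over a Kaggle-style tabular dataset. The train.csv file and its numeric target column are hypothetical stand-ins, not from any particular competition:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical Kaggle-style tabular dataset with a numeric "target" column.
df = pd.read_csv("train.csv")

# Typical first steps: inspect missing values, then fill numeric gaps.
print(df.isna().sum())
numeric_cols = df.select_dtypes("number").columns.drop("target")
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# A quick cross-validated baseline before any feature engineering.
X, y = df[numeric_cols], df["target"]
scores = cross_val_score(GradientBoostingClassifier(), X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A simple baseline like this gives you a yardstick: every preprocessing or modeling idea you try afterward either beats it or teaches you something.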

Papers / Concepts

Reading research papers is a common undertaking in ML. It can be rewarding to summarize, implement, and blog about a paper that is impactful and tackles interesting problems. Extending this theme, you have a number of options:

  • You can summarize a paper in your own words and publish it on platforms like Medium or even on your own blog. It’s also important to get feedback on your work, so feel free to share it on social media and to let the paper’s authors know about it. A paper summary is supposed to reflect how you perceived the paper, so if you have criticisms, include them with solid reasoning. If you’re looking for an example, definitely check out Aakash Kumar Nain’s paper summaries.

    Picking a paper can be non-trivial, especially given the constant flood of papers on arXiv. I usually follow the blogs of research labs at Google, Meta, AI2, BAIR, etc., to keep myself up to date on the work I care about. There’s a good chance you’ll find your niche there. Following the work of the most accomplished researchers in my favorite domains is another practice I incorporate.

    In this regard, I highly recommend the following two books, which actively cite relevant research papers and implement them in practically beneficial ways: Deep Learning for Coders with fastai and PyTorch by Jeremy Howard and Sylvain Gugger, and Natural Language Processing with Transformers by Lewis Tunstall, Leandro von Werra, and Thomas Wolf. For developing a general understanding of different areas of ML, I recommend the articles on Distill.

  • Nowadays, a majority of ML papers come with official open-source implementations in the interest of reproducibility, but some don’t. Either way, it’s a good exercise to try to implement the novel bits of a paper yourself (see the sketch after this list for a small example). The timm library is a great example of how paper reimplementations should be structured.

  • Blogging has easily become one of the most effective ways to communicate your understanding of something. This does not need to be tied only to papers, though. You can always pick up an interesting concept and blog about it. Many ML stalwarts keep stressing why you should blog, and here is one such example: Why you (yes, you) should blog by Rachel Thomas.
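
To make the reimplementation idea concrete, here's a minimal sketch of one such "novel bit": the scaled dot-product attention from "Attention Is All You Need", written in PyTorch. The tensor shapes in the sanity check are assumptions chosen just for illustration:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Scaled dot-product attention, as described in the Transformer paper.

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    """
    d_k = q.size(-1)
    # Similarity scores, scaled to keep the softmax gradients well-behaved.
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Block out masked positions before normalization.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, v)

# Quick sanity check against random inputs.
q = k = v = torch.randn(2, 4, 8, 16)  # (batch, heads, seq_len, head_dim)
out = scaled_dot_product_attention(q, k, v)
assert out.shape == q.shape
```

Even a small component like this, reimplemented and sanity-checked from the paper's equations, teaches you far more than reading the official code alone.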

You can also consider making videos about papers, concepts, and so on. If you haven’t already, definitely get to know Yannic Kilcher, who has redefined what this format can look like.

Open-source Contributions

From my personal experience, I can confirm that making open-source contributions is one of the most useful ways to stay involved in the domain. All the popular ML Python libraries (Scikit-Learn, PyTorch, TensorFlow, Keras, JAX, Hugging Face Transformers, etc.) are open-source, which provides even more opportunities to learn and grow.

I have a separate presentation on this topic, but here are my perspectives for context:

  • When you contribute to a well-maintained open-source library for the first time, there’s a high chance you’ll learn a few things beyond just ML: writing unit tests (see the pytest sketch after this list), setting up a local development environment, library build tools, etc. This way, you get first-hand exposure to how software engineering is approached in the ML domain in general.

    So, not only do you get to contribute to your favorite open-source library (which is an indescribable feeling anyway), but you also get to learn skills that are in high practical demand. Beyond these, you get a chance to interact with experts and use their feedback to improve your work. Additionally, you get to collect objective evidence of your skills - coding, a thorough understanding of a critical component of the library, building a library, etc. - all of which is noteworthy.

    Note that you’re not alone if you feel lost when you’re just starting to contribute to an open-source library. It happens to most people. But when you put your mind toward making your contribution anyway, you get better in the process.

  • If you feel you’re not yet ready to make contributions, working on your own open-source projects is another promising avenue to pursue. Take Andrej Karpathy’s minGPT project as an example. Besides being an amazing educational resource for learning about the GPT model, it serves as a great reference for implementing many of the foundational blocks of Transformer-based architectures.

    If you’re looking for open-source project ideas, my presentation on this topic might be helpful.
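
To make the unit-testing point from above concrete, here's a minimal pytest sketch of the kind of test you might write when contributing a small utility. The normalize function is a hypothetical example, not taken from any particular library:

```python
# test_normalize.py -- run with `pytest test_normalize.py`
import numpy as np
import pytest

def normalize(x, eps=1e-8):
    """Hypothetical utility under test: scale to zero mean, unit variance."""
    x = np.asarray(x, dtype=np.float64)
    return (x - x.mean()) / (x.std() + eps)

def test_normalize_zero_mean_unit_variance():
    # The happy path: output statistics match the documented contract.
    out = normalize(np.array([1.0, 2.0, 3.0, 4.0]))
    assert out.mean() == pytest.approx(0.0, abs=1e-7)
    assert out.std() == pytest.approx(1.0, rel=1e-6)

def test_normalize_constant_input_does_not_divide_by_zero():
    # The edge case: constant input must not produce NaN or inf.
    out = normalize(np.zeros(5))
    assert np.isfinite(out).all()
```

Covering both the happy path and an edge case, as above, is usually what maintainers look for in a contribution.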

Now that we’ve looked into different ways of staying engaged with independent ML practice, let’s look at two individuals from the ML community who have followed similar paths.

References from the Community

Matt (ML Engineer at Hugging Face) says -

[…] I did a few small projects to get familiar with Keras and then tried reimplementing papers or building examples to contribute to places like keras-contrib or keras.io.


Chansung (ML-GDE and MLOps Engineer) says -

[…] Anyways, I actually didn’t plan what to do for the next few years. I just have followed my interests and the joy to participate as a community member. And whenever I make any moves, I found other exciting events are waiting for me. These days, I am really enjoying creating open-source projects and applied ML products, and collaborative projects with you as well.


Both of them continue to push their boundaries for self-improvement and are exceptional at what they do.

Finishing Up

The pursuit of betterment doesn’t stop after you land the job you were aspiring to. I continue to benefit from my open-source engagements even after working professionally in the area for some time now. I hope you’re able to take the pointers discussed in this post forward and experiment with them. If you have suggestions for other interesting ways to practice ML independently, please let me know.

Acknowledgments

I want to thank all my wonderful collaborators and mentors who continue to inspire me to be better. I am also thankful to Neerajan Saha for proofreading this post.