Sharing your work online effectively

A few tips on sharing your work online effectively.
Categories: blogs, sharing

Published: April 20, 2020

Well, you have put a lot of blood and sweat into writing your latest blog post on Machine Learning. Don’t let that effort go in vain; let the world know about it. Sharing your blog posts across different channels not only gives you exposure but can also get you tremendous feedback on your work. In my personal experience, that feedback has been super useful for improving myself not only as a writer but also as a communicator. There may be times when you have missed an important detail, or have unknowingly introduced a sneaky bug in the code listings of your blog; those are exactly the things that get caught during that exchange of feedback.

In this short article, I am going to list a few different ways to share your work and get feedback. Note that your work can be anything from a crucial GitHub PR to a weekend project. Although the following platforms and communities are mostly specific to Machine Learning, I hope this guide will be useful for tech bloggers in general.

Sharing on platforms/communities

Before I start the sharing process, I generally create a Google Doc to keep track of where I am sharing my work. It essentially acts as a checklist of all the places where I want to share the work. Here’s the template I follow for the Google Doc:

  • Link to where the work has been posted.

  • Brief description of the work.

  • A post table, to track each platform and the corresponding share.

I generally keep the description to a maximum of 280 characters so that I can use it on Twitter as well.

Now, turning to the platforms and communities, here are some recommendations (in no particular order):

  • HackerNews (https://news.ycombinator.com/newest)

  • Made With ML (https://madewithml.com/)

  • Reddit
      • r/MachineLearning
      • r/MachinesLearn
      • r/learnmachinelearning
      • r/deeplearning

  • Twitter

  • Facebook
      • AIDL
      • Montreal AI
      • Deep Learning

  • Fast.ai Forum (https://forums.fast.ai/)

  • LinkedIn

  • Google Groups (depends on the framework used in the work)
      • discuss@tensorflow.org
      • tflite@tensorflow.org
      • tfjs@tensorflow.org
      • tfx@tensorflow.org

While sharing my work, I find it important to always attach a brief description. Additionally, if your work implements a research paper, you should definitely include it on Papers with Code.

Sharing to aid discussions

You might be active on online forums like Quora, StackOverflow, and so on. While participating in a discussion on those forums, you can make effective use of your work if it is relevant. In these cases, the approach is not to just drop a link to your work, but to first write up the important points relevant to the discussion and then supply the link to your work to support them. Let’s say there’s a discussion going on around the topic “What is Weight Initialization in Neural Nets?” Here’s how I would approach my comment:

A neural net can be viewed as a function with learnable parameters, and those parameters are often referred to as weights and biases. When the training of a neural net starts, these parameters (typically the weights) are initialized in a number of different ways: sometimes with constant values like 0’s and 1’s, sometimes with values sampled from a distribution (typically a uniform or normal distribution), and sometimes with more sophisticated schemes like Xavier initialization. The performance of a neural net depends a lot on how its parameters are initialized at the start of training. Moreover, if we initialize them with arbitrary random values on each run, the training is (almost) bound to be non-reproducible, and it may not perform well either. On the other hand, if we initialize them with constant values, the network might take way too long to converge, and we also eliminate the beauty of randomness, which is what gives a neural net the power to reach convergence quicker with gradient-based learning. We clearly need a better way to initialize the weights; careful initialization helps us train the network better. To know more, please follow this article of mine.
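To make those schemes a bit more concrete, here is a minimal sketch, assuming tf.keras (the TensorFlow mailing lists listed earlier hint at that stack); the layer sizes and distribution parameters are arbitrary illustration values, not recommendations:

```python
# A minimal sketch of the initialization schemes mentioned above, assuming tf.keras.
# Layer sizes and distribution parameters are arbitrary illustration values.
import tensorflow as tf

initializers = {
    "zeros": tf.keras.initializers.Zeros(),                       # constant 0's
    "ones": tf.keras.initializers.Ones(),                         # constant 1's
    "uniform": tf.keras.initializers.RandomUniform(-0.05, 0.05),  # sampled from a uniform distribution
    "normal": tf.keras.initializers.RandomNormal(0.0, 0.05),      # sampled from a normal distribution
    "xavier": tf.keras.initializers.GlorotUniform(),              # Xavier/Glorot initialization
}

for name, init in initializers.items():
    layer = tf.keras.layers.Dense(16, kernel_initializer=init)
    layer.build(input_shape=(None, 8))  # materialize the weight matrix
    weights = layer.kernel.numpy()
    print(f"{name:>8}: mean={weights.mean():+.4f}, std={weights.std():.4f}")
```

Printing the mean and spread of the freshly built weights makes it easy to see how differently each scheme starts the network off before any training happens.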

Well, that’s it for now. I hope this proves useful to you. Please share any suggestions you may have in the comments. I am thankful to Alessio of FloydHub for sharing these tips with me.