


Here are 6 challenging open-source data science projects to level up your data scientist skillset

There are some intriguing data science projects, including how to put deep learning models into production and a different way to measure artificial intelligence, among others

Each data science project comes with end-to-end code so you can download that and get started right now!


When was the last time you took up a data science project outside your daily work? I’m certainly guilty of not doing this regularly. We tend to get caught up in our professional lives and slip up on the learning front.

That’s a step we simply cannot afford to miss! Data science is one of the fastest-growing industries right now, thanks to the unprecedented rise in data and computational power. There’s no excuse not to know the latest techniques and frameworks in your space, whether that’s Natural Language Processing (NLP), computer vision, or something else.

And the best way to learn, practice and apply these state-of-the-art techniques is through data science projects.

This article is the perfect place for you to begin. I have put together six challenging yet powerful open source data science projects to help you hone and fine-tune your skillset. I have provided the end-to-end code as well for each project so you can download it right now and start working on your own machine!

This article is part of our monthly data science project series. We pick out the latest open source projects from GitHub and bring them straight to you. This is the 23rd edition of the series and we are grateful to our community for the overwhelming response that keeps this series going. Here’s the complete list of projects we have published this year so far:

Here are the 6 Data Science Projects We’ve Picked from GitHub (November Edition)

Open Source Deep Learning Projects

I haven’t come across a lot of work on 3D deep learning. That’s why I found this GitHub repository quite fascinating. The possibilities of 3D deep learning are tantalizing and potentially unique. Think about it – 3D imaging, geospatial analysis, architecture, etc. – so many data points at play!

Kaolin is a PyTorch library that aims to accelerate research in 3D deep learning. It provides efficient implementations of 3D modules for use in deep learning systems – something I’m sure all you industry veterans will appreciate.

We get a ton of functionality with Kaolin, including loading and preprocessing popular 3D datasets, evaluating and visualizing 3D results, among other things.

What I especially like about Kaolin is that the developers have curated multiple state-of-the-art deep learning architectures to help anyone get started with these projects.  You can read more about Kaolin and how it works in the official research paper here.

And if you’re new to deep learning and PyTorch, don’t worry! Here are a few tutorials (and a course) to get you on your way:

Putting your machine learning model into production is a challenging task most aspiring data scientists aren’t prepared for. The majority of courses don’t teach it. You won’t find a lot of articles and blogs about it. But knowing how to put your model into production is a key skill every organization wants a data scientist to possess.

Now take that up a notch for deep learning models. It is a tricky and challenging task. You’ve built a robust deep learning model, sure, but what’s next? How do you get that to the end user? How can you deploy a deep learning model?

That’s where this Production-Level Deep Learning project comes in. We need several different components to deploy a production-level deep learning system:

The GitHub repository I have linked above contains toolsets and frameworks along with a set of best practices deep learning experts follow. I really like the way each step in the full-stack deep learning pipeline is mapped and summarized succinctly. I’ll be referring back to it whenever I’m working on deploying deep learning models in the foreseeable future.
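To make the idea concrete, here is a toy sketch of the request flow such a system wires together (preprocess, predict, postprocess) with a stand-in model. Every name here is illustrative and not taken from the repository; in a real service the "model" would be a trained network loaded from a registry:

```python
# A toy sketch of the stages a production inference service typically
# chains together. The model is a stand-in (a list of weights).

def preprocess(raw: dict) -> list:
    # Validate and normalize the incoming request payload.
    features = raw.get("features")
    if not isinstance(features, list) or not features:
        raise ValueError("request must contain a non-empty 'features' list")
    return [float(x) for x in features]

def predict(features: list, weights: list) -> float:
    # Stand-in for model.forward(): a simple dot product.
    return sum(f * w for f, w in zip(features, weights))

def postprocess(score: float, threshold: float = 0.5) -> dict:
    # Convert the raw score into the response the end user sees.
    label = "positive" if score >= threshold else "negative"
    return {"score": round(score, 4), "label": label}

def serve(request: dict, weights: list) -> dict:
    return postprocess(predict(preprocess(request), weights))

result = serve({"features": [0.2, 0.8, 0.5]}, weights=[0.1, 0.9, -0.2])
print(result)  # {'score': 0.64, 'label': 'positive'}
```

Each stage maps to a box in the repository's full-stack pipeline diagram; in production, each would also carry logging, input validation, and monitoring.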

I recommend checking out the below articles to get a taste of data engineering and why even data scientists need to acquire this skill:

Deep learning has made us all artists. No longer do we need expensive equipment to edit images and videos; computer vision techniques like GANs bring creativity right to our doorstep.

“The Ken Burns effect is a type of panning and zooming effect used in video production from still imagery.”

Creating a Ken Burns effect manually is time-consuming and honestly quite complex. Existing methods require a lot of input images taken from multiple angles. Not ideal. So in this project, the developers have created “a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera”.
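As a point of reference, the classic 2D version of the effect is just a crop window interpolated over time. Here is a minimal sketch of that part (the project above goes much further, inferring depth from the single image to move a virtual camera in 3D):

```python
# Linearly interpolate a crop window from a start rectangle to an end
# rectangle over n frames -- the "pan and zoom" core of a 2D Ken Burns move.

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def ken_burns_crops(start: tuple, end: tuple, n_frames: int) -> list:
    """start/end are (x, y, width, height) crop windows in pixels."""
    crops = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # 0.0 at the first frame, 1.0 at the last
        crops.append(tuple(round(lerp(s, e, t), 1) for s, e in zip(start, end)))
    return crops

# Zoom from a full 1920x1080 frame into a 960x540 region at (480, 270).
frames = ken_burns_crops((0, 0, 1920, 1080), (480, 270, 960, 540), n_frames=5)
print(frames[0])   # (0.0, 0.0, 1920.0, 1080.0)
print(frames[-1])  # (480.0, 270.0, 960.0, 540.0)
```

Rendering each crop, rescaled to the output resolution, produces the pan-and-zoom video; the 3D variant additionally parallax-shifts pixels by their estimated depth.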

And it’s no surprise to see that the implementation is in PyTorch, is it? You need to get on board the PyTorch bandwagon now to harness its full potential and give your deep learning career a major boost.

Open Source Artificial Intelligence (AI), NLP and Other Data Science Projects

Graphs have become an important part of the machine learning lifecycle in recent times. They are an effective and efficient method of analyzing data, building recommendation systems, mining social networks, etc. In short – they are super useful.

In fact, we at Analytics Vidhya are big proponents of graphs and have a collection of useful articles on them you can read here.

Plato is a framework for distributed graph computation and machine learning. It has been developed by the folks at Tencent and open-sourced recently. Plato is a state-of-the-art framework that comes with incredible computing power. While analyzing billions of nodes, Plato can reduce the computing time from days to minutes (that’s the power of graphs!).

So, instead of relying on several hundred servers, Plato can finish its tasks on as few as ten. Tencent is also using Plato on its WeChat platform (for all you text-savvy readers).
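PageRank, one of the algorithms Plato is benchmarked on, can be sketched in plain Python. This is a single-machine toy, not Plato's distributed implementation, but the iteration it parallelizes is the same:

```python
# A plain-Python sketch of PageRank. Frameworks like Plato distribute this
# same iteration across servers; here the whole graph fits in a dict.

def pagerank(graph: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """graph maps each node to the list of nodes it links to."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, out_links in graph.items():
            if out_links:
                share = damping * rank[node] / len(out_links)
                for target in out_links:
                    new_rank[target] += share
            else:  # dangling node: spread its rank evenly
                for n in nodes:
                    new_rank[n] += damping * rank[node] / len(nodes)
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # 'c' collects the most inbound rank
```

At WeChat scale the graph has billions of edges, which is exactly why Plato's reported reduction from days to minutes matters.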

Here’s a comparison of Plato against Spark GraphX on the PageRank and LPA benchmarks:

You can read more about Plato here. If you’re new to graphs and are wondering how they tie into data science, here’s an excellent article to help you out:

HuggingFace is the most active research group I’ve seen in the NLP space. They seem to come up with new releases and frameworks mere hours after the official developers announce them – it’s incredible. I would highly recommend following HuggingFace on Twitter to stay up-to-date with their work.

Their latest release is Transformers v2.2.0, which includes four new NLP models (among other new features):

ALBERT (PyTorch and TensorFlow): A Lite version of BERT

CamemBERT (PyTorch): A French Language Model

GPT2-XL (PyTorch and TensorFlow): The largest version of OpenAI’s GPT-2

DistilRoBERTa (PyTorch and TensorFlow)

As always, we have the tutorials for the latest state-of-the-art NLP frameworks:

This is a slightly different project from what I typically include in these articles. But I feel it’s an important one given how far away we still are from even getting close to artificial general intelligence.

ARC, short for the Abstraction and Reasoning Corpus, is an artificial general intelligence benchmark that aims to emulate a “human-like form of general fluid intelligence”. The idea and the research behind it come from François Chollet, the author of the popular Keras framework.

Mr. Chollet, in his research paper titled “On the Measure of Intelligence“, provides an updated definition of intelligence based on Algorithmic Information Theory. He also proposes a new set of guidelines to showcase what a general AI benchmark should be. And the ARC is that benchmark based on these guidelines.
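ARC tasks are distributed as JSON files containing a few train input/output grid pairs plus test inputs. In that few-shot spirit, here is a hypothetical mini-solver that tries the simplest possible hypothesis, a per-cell color substitution, and checks it against every train pair (the task below is invented for illustration, not taken from the dataset):

```python
# A toy ARC-style solver: infer a color-substitution rule from the train
# pairs, reject it if the pairs are inconsistent, then apply it to a test grid.

def infer_color_map(pairs):
    mapping = {}
    for grid_in, grid_out in pairs:
        for row_in, row_out in zip(grid_in, grid_out):
            for a, b in zip(row_in, row_out):
                if mapping.setdefault(a, b) != b:
                    return None  # inconsistent: not a pure recoloring task
    return mapping

def apply_map(grid, mapping):
    return [[mapping[c] for c in row] for row in grid]

task = {
    "train": [([[1, 1], [2, 2]], [[3, 3], [4, 4]]),
              ([[2, 1], [1, 2]], [[4, 3], [3, 4]])],
    "test_input": [[1, 2], [2, 1]],
}
mapping = infer_color_map(task["train"])
print(apply_map(task["test_input"], mapping))  # [[3, 4], [4, 3]]
```

Real ARC tasks require far richer hypotheses (symmetry, counting, object manipulation), which is exactly what makes the benchmark hard for current systems.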

I think it’s a really important topic that will spur a lot of debate in the community. That’s a healthy thing and will hopefully lead to even more research on the topic and perhaps a big step forward in the artificial general intelligence space.

This GitHub repository contains the ARC dataset along with a browser-based interface to try solving the tasks manually. I’ve mentioned a couple of resources below to help you understand what AI is and how it works:

End Notes

So, which open source project did you find the most relevant? I have tried to diversify the topics and domains as much as possible to help you expand your horizons. I have seen our community embrace the deep learning projects with the enthusiasm of a truly passionate learner – and I hope this month’s collection will help you out further.

Personally, I will be digging deeper into François Chollet’s paper on measuring intelligence as that has really caught my eye. It’s rare that we get to openly read about benchmarking artificial general intelligence systems, right?



10 Best Practices For Data Science Projects

The 10 best practices for data science projects that assist you in resolving real-world problems

The field of data science has earned the reputation of being the next big thing in technology and business. In recent years, the number of businesses using data science applications has only increased.

We’ll go over some of these best practices for data science projects in this article, which businesses can use to boost the success rate of their data science efforts. But first, let’s learn more about the idea of a data science project. We have enlisted the 10 best practices below.

First Best Practice: To get the support of the business, begin with a quick-win use case.

You must focus on use cases that share three essential characteristics:

A business champion who is ready to take responsibility for the use case’s success. The champion is essential for establishing its business significance and gaining executive and general business support.

Clearly defined KPIs to evaluate the business impact of the results. These are necessary to show business stakeholders the measurable impact of your project, before and after.

Accessible, available, and clean information. If your data isn’t of high quality or readily available, you run the risk of turning your quick-win into a data cleansing exercise, which is not something you want to do if you want to maintain the interest of the business.

Second Best Practice: Establish a strong Data Science organization and team.

Third Best Practice: Choose the right tools and metrics for the job.

When it comes to metrics, it’s important to choose the right ones to connect data science results to business objectives. Predictive algorithms, for instance, are frequently evaluated using the Root-Mean-Square Error (RMSE) metric; however, depending on the underlying business objective, the Root-Mean-Square Logarithmic Error (RMSLE) may be the better choice. For optimization algorithms, on the other hand, the metrics are typically business KPIs such as revenue or cost.
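To see why the choice matters, here is a quick sketch comparing RMSE with a logarithmic error metric on predictions that are consistently 20% too high:

```python
import math

# RMSE punishes large absolute errors, while RMSLE (root mean squared
# logarithmic error) measures relative error, which often matches business
# objectives like revenue forecasting better.

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def rmsle(actual, predicted):
    return math.sqrt(sum((math.log1p(a) - math.log1p(p)) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

actual    = [10, 100, 1000]
predicted = [12, 120, 1200]  # every prediction is 20% high

print(round(rmse(actual, predicted), 2))   # 116.05 -- dominated by the largest value
print(round(rmsle(actual, predicted), 2))  # 0.18 -- the 20% error counts the same at every scale
```

If the business cares about percentage accuracy across accounts of very different sizes, the logarithmic metric reflects that objective; plain RMSE effectively only scores the biggest accounts.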

Fourth Best Practice: Establish an early POC dashboard for business stakeholders.

Gaining business support necessitates the early creation of a POC dashboard for business stakeholders. To accomplish this, begin your project with a Design Thinking workshop with business stakeholders. During this session, come up with concepts and think about what a dashboard resulting from the project would mean to them.

Fifth Best Practice: Spread the word widely and frequently.

Through regular reviews of the project’s progress, you can maintain business buy-in. In these reviews, let a stakeholder in your company lead the presentation. Instead of presenting the results in code, make use of your POC dashboard to present them in business language.

Sixth Best Practice: Use an Agile strategy.

An Agile Data Science approach should be used to guarantee consistent progress. This means breaking your project into sprints of two to three weeks, with Agile task planning and a sprint review at the end of each cycle to show examples of the results achieved. To contain the project’s scope, manage risks, and reduce uncertainty, invite all stakeholders to the sprint reviews and sprint planning.

Seventh Best Practice: Make provision for adaptable infrastructure.

One of the primary reasons Data Science POCs do not progress into the real world is that, when scaling up becomes necessary, the required infrastructure is not readily available. The POC is then put on hold until infrastructure is acquired, which typically takes a long time, or until the POC is forgotten.

Eighth Best Practice: During the POC phase, ask operational questions.

Find answers to operational questions like how often models will need to be tuned, how much data will be ingested (e.g., streaming vs. a scheduled job), how much data will be produced, and how much hardware will be needed during the POC.

Ninth Best Practice: Prepare a strategy to put your POC into action.

From day one, begin planning how to put your POC into production, and include a production plan in your final POC sprint review. You may be working with a subset of the data during the POC period; to implement your POC in the real world, you must also consider other data requirements, such as governance, volume, and the role of data stewards.

Tenth Best Practice: Replace insights and predictions with optimized actions.

Why Does Dataops For Data Science Projects Matter?

Applying DataOps practices to data science projects can have a potential impact on businesses.

Today organizations are carrying out more and more data projects that promise great opportunities to drive agility and competence. But they are facing growing pressure to extract meaningful insights from data. Most of them realize the potential of data science to deliver business value, and some are already investing heavily in data science programs.

DataOps for Data Science Success in an Enterprise

Translating structured or unstructured data into business and operational insights, and subsequently incorporating them into a data monetization value chain, is a very complex task. Even the data analysis companies already perform often produces little value for them. According to Gartner, only 20 percent of analytic insights will deliver business outcomes through 2023. In this light, DataOps emerges as an agile way of developing, deploying, and operating data-intensive applications, fostering a “data factory” mindset: orchestrating, monitoring, and managing the data pipeline in an automated way for everyone who handles data.

Today organizations are carrying out more and more data projects that promise great opportunities to drive agility and competence, but they face growing pressure to extract meaningful insights from data. Most of them realize the potential of data science to deliver business value, and some are already investing heavily in data science programs. It is no wonder the data landscape is growing rapidly, and processing and analyzing that data requires a sound approach. This is where data scientists step in, performing data visualization, data mining, and information management.

Because most companies expect a significant return from data science investments, most data science implementations are high-cost IT projects. Yet they often fail to generate value for the business. This is why experts are now talking about DataOps, a new and independent approach to delivering data science value at scale. DataOps arises from the need to productionalize a rapidly increasing number of analytics projects and then manage their lifecycles. With the introduction of DataOps, data scientists and data engineers can work together, bringing a level of collaboration and communication that generates actionable insight for the business.

Significantly, DataOps is driven by data lifecycles and insights. It essentially applies the DevOps process to data pipelines, using automation and Agile methodology to cut the time spent fixing pipeline issues and to get data science models into production faster. Still, the two have distinct features and capabilities: while DevOps is a collaborative process between two technical teams, DataOps simplifies collaboration between data analysts, engineers, data scientists, and everyone else in an organization who uses data. This makes DataOps a much more multifaceted process than DevOps.

For a majority of organizations, DataOps is becoming a crucial practice for enduring in an evolving digital world, where coping with real-time business intelligence is necessary to gain a competitive edge over peers. The instability of data, a rapidly evolving technology landscape, and the increasing demands of an Agile business ecosystem are a few of the reasons driving the need for DataOps. IBM DataOps, for instance, enables agile data collaboration to accelerate the speed and scale of operations and analytics throughout the data lifecycle. It also helps create a business-ready analytics foundation by combining AI-powered automation, infused governance, data protection, and a robust knowledge catalog to operationalize relentless, high-quality data across the business.

Comprehensively, by applying DataOps practices to all data activities, from data management and integration to data engineering and data security, enterprises can simplify the process of data science at an organizational level.

6 Considerations When Evaluating An Intent Data Source

John Steinert



Whether you’re new to intent or have been experimenting with different sources for a while, our clients have found it helpful to evaluate potential additions to their stacks in terms of six considerations.

The first three considerations speak directly to the overall importance of intent data, and the last three focus more on its specific value to sales teams.

How intent data can drive real change and better outcomes for your organization

#1 – Actionability

One huge difference between behavioral data and many of the other feeds you might pump into your stack is how rapidly it changes. Given this rate of change, to maximize value, the data has to both inspire you and enable you to act with confidence. When adding a data source, make sure it provides everything you need to react quickly to new insights.

#2 – Substance

There’s plenty of data available around that might increase what you know about a particular account. The question you need to be asking about any data source is whether or not the addition will truly help to either reinforce your current decisions, or conversely, provide a good rationale for near-term changes. If a new source lacks the precision necessary to change your own behavior in some substantive way, it may just be another “nice to have” that you can do fine without.

#3 – Revenue

When your marketing team becomes better aligned with sales, its inspirations and instincts undergo real change. Instead of being obsessed only with outputs, marketing too becomes focused on business outcomes. This starts delivering more real opportunities into the pipeline and, even more, more revenue out the other end. Purchase intent data’s primary purpose is to deliver more revenue to your company. Before adding it to your stack, make sure you understand clearly how directly and quickly that added data can make this a reality for you.

Real intent accelerates sales success by exposing the buying behaviors and sentiments of a buying team

Great salespeople are super-adept at turning opportunities into closed/won deals. They’re better than most at assessing need and they’re experts at engaging very specific people with highly relevant outreach. They’re able to adjust and refine their interactions quickly. They create far more win-win conversations across buying teams.

Real intent data accelerates sales success precisely because it supplies the right intelligence a seller needs to make them better at doing their job. So to deliver on this promise, real intent data must obviously be:

#4 – Relevant and accurate

While it can be argued that any information about an account could be useful to a seller, in practice, more information isn’t necessarily better. It takes time to process. It can lead to missteps caused by a perception that a topic matters to the prospect when it really doesn’t. It can waste time.

If the data source is not exceptionally relevant to the types of conversations that sellers use to gain meetings and create opportunities, chances are it’s not as useful as the supplier suggests.

#5 – Precise and prescriptive

Once you’ve made sure that a data source can be vetted appropriately for GDPR, CCPA and other evolving privacy concerns, you can begin evaluating it in terms that matter to your user constituencies.

In jobs where time is especially costly, like many sales functions, the more specific your inputs can be, the better. Behavioral signal sources that come packaged with words like “may” or “seem” or “usually” and the like can easily confuse your colleagues. Instead of immediately taking action, they need to evaluate the material and think through how to incorporate it. This costs precious time.

When evaluating a source of data, look closely at how precise it is. Determine exactly what it can tell you that will be immediately useful to a salesperson. Look for information that they can use that will change what they otherwise would do or say. If the data is not precise enough, it won’t drive change. The right data is like a prescription – it should be obvious that if you use it, you will have a better chance at success.

#6 – Information rich

We strongly recommend that all clients have good data hygiene processes in place because that will raise the overall average usability of each prospect or contact record available to your marketers and sellers. But for most companies, given resource pressures, data hygiene alone is not enough to raise productivity to where it needs to be.

Out of the gate, a good intent data source clearly separates your prospecting names into two very distinct groups – 1) those not currently showing purchase intent who demand less of your immediate attention and 2) those that are showing purchase intent who you need to focus on if you want to grow more revenue and share.
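As a minimal illustration of that split, assume each account record carries a boolean intent flag. The field names here are hypothetical; real intent feeds are richer and time-varying:

```python
# Partition prospecting accounts into the two groups described above:
# those showing purchase intent (focus now) and those that aren't (nurture).

accounts = [
    {"name": "Acme Corp", "showing_intent": True},
    {"name": "Globex",    "showing_intent": False},
    {"name": "Initech",   "showing_intent": True},
]

focus_now = [a["name"] for a in accounts if a["showing_intent"]]
nurture   = [a["name"] for a in accounts if not a["showing_intent"]]

print(focus_now)  # ['Acme Corp', 'Initech']
print(nurture)    # ['Globex']
```

A good source goes further than this binary split, ranking accounts and surfacing which topics each is researching.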

Simply pointing your users towards the right accounts is only one small step better than a broad TAM (total addressable market), a well-defined ICP (ideal customer profile), or even your named or target accounts (those you’re using for account-based marketing (ABM) or very specific sales programs). The right data source can take you beyond a directional ranking of accounts all the way down to what your prospects actually care about, from a variety of angles.

That’s how your teams can best discover more opportunities far more efficiently. Each account might have hundreds of possible contacts within it showing light search behaviors. You need to know which ones actually matter.

With technology changing and converging faster and faster, broad search terms aren’t enough because each account may show interest in any number of generally related interests. You need to know exactly what the real researchers are actually reading.

And if your chances change given who else is under consideration, the more you can know about your competition in the context of a developing deal, the better prepared you will be. These 6 intent data considerations will hopefully equip you to make the right decisions when it comes to adding a new intent data source into your stack.


18 Useful Mobile Apps For Data Scientist / Data Analysts


Does your passion lie in Data Science /Analytics?

Currently, data science and machine learning are changing the world. Here’s your chance to live your passion. To become better at what you do, you no longer need to stick around your laptop for long hours. Take a break and switch to a faster way of learning: mobile apps.

Did you know you can run Python on your phone?

You heard right. Mobile apps have added immense excitement to the way we learn. Subjects once considered difficult to understand are now taught using pictures and stories on your mobile or tablet. You can access them anywhere, whether you are on a bus, in a car, on a train, in a restaurant, or anywhere else. All you need is a pair of earphones to plug in (if necessary).

In this article, I’ve shared some useful apps I’ve found that can improve your essential data science / analytics skills. These apps can improve your listening, logical, decision-making, mathematical, and statistical skills, and much more. They are far more powerful than one might imagine.

I’ve grouped these mobile apps into various categories. I presume you know your weak areas, so this will help you target those sweet spots. These are Android apps available for free on the Google Play Store.

Note: We do not intend to promote any mobile app using this article. These apps have been selected on the basis of their ratings and performance. 

Mobile Applications

Brain Training

It is a personalized brain training program designed to improve your cognitive skills. Your brain training session includes 3 exercises every day, selected based on your past performance. If you are found to be weak in a certain area, you can expect those exercises to repeat frequently. This app has various training programs, but they are limited in the free version. Still, it succeeds in providing an exhilarating experience.

Lumosity is a personalized brain training program with over 40 fun games, intuitive enough to challenge your core thinking. This app will help you improve your reading, writing, logical, and mathematical skills. The personalized training programs are really interesting and addictive. You get 3 brain training exercises every day; advanced exercises can be unlocked with payment.

This is a fitness app for your brain. It can help you improve your memory, concentration, logical thinking, and intelligence. It comprises various sets of exercises across more than 60 creatively designed programs. It also allows you to challenge your peers and measure your weekly performance. You get limited programs in the unpaid version, but they are still worth trying. If you use these apps sincerely, for a few minutes every day, they can bring a huge change in your life.

Do you want to become good with numbers? This app will help you with that. You should feel comfortable doing numerical calculations of any type; this skill helps a lot at every stage of life. In short, you must develop your mental math skills and train your mind to do numerical computations at your fingertips. This is a beginner-level app with various interesting exercises to help you build mathematical intuition.

This app allows you to run Python on your mobile. It enables your Android device to run Python scripts and projects. Highly appreciated on the Play Store, this app works best with Python 2.7. It consists of a Python interpreter, console, editor, and the SL4A library for Android, and it includes many useful Python libraries. It can also execute Python code and files from QR codes. Thus, you are no longer limited to running Python on your machine.

You are no longer required to stick to your machine to learn Python. Here’s a Python tutorial for your Android device. It covers Python basics, data types, control structures, functions and modules, exceptions, working with files, functional programming, object-oriented programming, and much more. For an enhanced learning experience, the tutorial includes fill-in-the-blank, true/false, re-arrange, and question-answer exercises. It’s a great resource for beginners interested in Python.

Just like Python, you can learn R on your Android device too. This app introduces you to the basics of R programming; consider it a shortened version of swirl in R. I’d recommend it for complete beginners. It covers vectors, functions, matrices, factors, data frames, lists, and much more. After completing this tutorial, you’ll be ready to undertake your first data analysis.

Termux offers a powerful interface with an extensive Linux package collection. This app allows you to run text based games with frotz, projects with git and subversion, use python console, access servers, edit files with nano and vim and much more on your android devices. You can also install desired packages using an in-built apt package manager known from the Debian and Ubuntu Linux distributions.

Statistics & Mathematics

This app is for beginners in data science / analytics. Consider it your refresher on various statistical measures. It covers topics such as frequency distributions and graphs, data description, probability, distributions, estimation, hypothesis testing, and much more. If you are preparing for an exam, this can be an ideal tutorial for you.

Once you have acquired a basic knowledge of statistics, this app will make more sense to you. It allows you to evaluate many probability functions on your Android device, and it assumes prior knowledge of probability distributions such as the binomial. It can compute the probability mass function for the Poisson and hypergeometric distributions, and plot functions for the Gaussian, Student’s t, chi-square, and log-normal distributions, among others.
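As an example of the kind of calculation such an app performs, the Poisson probability mass function, P(X = k) = e^(-λ) λ^k / k!, is straightforward to evaluate in Python:

```python
import math

# The Poisson probability mass function: the probability of observing
# exactly k events when events arrive at an average rate of lam per interval.

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Probability of exactly 3 events when the average rate is 2 per interval.
print(round(poisson_pmf(3, 2.0), 4))  # 0.1804
```

Summing the pmf over all k gives 1, a quick sanity check that the formula is implemented correctly.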

This is a statistical calculator for your Android device. It can help you calculate various statistical quantities such as sample sizes, statistical distribution tables, and common statistical analyses, saving you calculation time; it does them in seconds. It is a handy tool for basic as well as scientific statistics. The app supports only .csv files, but that should suffice for most data analysis requirements.

You can watch Khan Academy videos on your Android device as well. As an aspiring data scientist, you should look for the linear algebra, probability, and statistics courses on Khan Academy. These will help prepare you for the rigorous mathematical calculations in machine learning algorithms. The app has a redesigned interface that delivers an enhanced user experience, and you can also sync your Khan Academy progress with it.


You have probably browsed MOOC websites to take a data science or analytics course. What many of you may not know is that you can continue your learning on your Android device as well. Below are some useful apps from popular open course providers:

Udacity offers a wide range of courses, and now you no longer need to wait for access to a laptop. If you have an Android device, you can take and complete courses right on your phone. There is no difference between learning on their website and in the app, and the app's user interface is clean and easy to use.

Millions of students are acquiring new skills from Coursera. To improve their learning experience, Coursera has also established an Android presence. The app delivers timely notifications directly to your device, so you can learn anytime, anywhere, without waiting for the perfect time to study. You will find all the features that Coursera offers on its website.

If you are an edX subscriber, you will want to download this app and continue your learning on your Android device as well. It lets you download course videos, receive instant notifications, stream class videos over Wi-Fi or a cellular connection, and much more. Like the others, edX has made learning far more accessible and enjoyable.

Udemy (Download – 5million, Rating – 4.3, Size – 27.47 MB)


Analytics Vidhya (Download – 10k, Rating – 4.4, Size – 6.5 MB)

This is Analytics Vidhya’s own mobile application. We are including it in this list because we feel it will be super useful to any data scientist who uses it. It carries all the blog posts we publish on AV, including AVBytes, and you can also save articles to read later. It’s an ideal companion for your data science journey.

Frequently Asked Questions

Q1. What app is used for data science?

A. JupyterLab is commonly used for data science tasks. It provides an integrated development environment (IDE) with support for multiple programming languages, including Python, R, and Julia. JupyterLab offers features like notebooks, code editors, data visualization tools, and terminal access, making it a versatile and widely adopted application for data scientists to analyze, visualize, and experiment with data.

Q2. Can we do data science in mobile?

A. Yes, it is possible to perform data science tasks on mobile devices. Several mobile apps and platforms are available that allow for data analysis, visualization, and even running machine learning models. These apps provide a simplified interface and may have limitations compared to desktop environments but can still be useful for basic data exploration, on-the-go analysis, and learning purposes.

End Notes

It might be difficult to download all 18 of these apps on your Android device; device memory poses its own challenges. I understand. So identify your areas of improvement and work towards them accordingly. To reap the maximum benefit from these apps, follow a disciplined schedule: dedicate a few minutes of your day to these training programs and learn something useful every day.


Beginner’s Guide To Get A Data Science Internship

This article was published as a part of the Data Science Blogathon.


Generally, anyone pursuing data science wants exposure: an opportunity in this field to feel validated, motivated to move forward, and on the path to becoming a renowned data scientist. One of the biggest and most meaningful opportunities a student can get in this field is being selected as a data science intern. There are other trainings and tasks you can do to sharpen and strengthen your profile and resume, but since I started as a Data Scientist Intern in February 2023, I will share my thoughts and the journey I took to become eligible for this post.

About my Phase

To start off, let me make it very clear that I am not some tech genius who has been coding since class 6 or 8 or even 11. I am an artist and have always been one: singing for the past 8 years, doing theatre for the past 7, dancing classical and lyrical styles, sketching, and practicing all kinds of creative skills similar to these popular art forms. So I was definitely behind most of my peers during the initial phase of my computer science engineering degree at my university (UPES). Throughout the whole first year, I performed no better than average.

Then came the second year, when I started looking for things in the technical field that interested me. I shortlisted mobile app development and artificial intelligence: mobile app dev because it seemed really cool and I imagined building apps that people would use daily on their phones, and artificial intelligence because I have secretly been in love with human psychology for a long time and used to study it a lot on my own. When I discovered that people had modeled the functioning of a neuron in technology (the neural network), I felt a chill down my spine from excitement. I gave both fields their individual attention and time.

Seed is Sown

We all face such dilemmas at some point: a choice we don't know how to make. For me, the deciding factor was the uneasy feeling of unsatisfied curiosity about neural networks while I was still doing mobile app development. Once I realized that mobile apps were not my priority given my excitement for AI, I took a U-turn from app dev and started my first course on neural networks. I hadn't done any machine learning beforehand because I didn't know much about it, so I started straight with neural networks in PyTorch. Believe me, I really enjoyed learning the theory of neural nets and deep learning in general, but when it came to coding with PyTorch, I couldn't understand how any of it worked; I had to memorize when, where, and which functions to use while coding neural nets just to convince myself that I knew how to code a neural network.


Then came the Covid-19 lockdown (22nd March 2020). Wow, what a blessing for me! I already had a big flame lit inside me to study deep learning, and when I got stuck at my PG in Dehradun due to the lockdown, I created a routine so tight and a habit so consistent that I studied and coded for 12-14 hours a day. This was the first time in my life I enjoyed studying so much that all those hours never exhausted me. I maintained this perseverance until February of 2023, when things changed after I got my internship as a Data Scientist at a wonderful hospitality start-up: “Upswing Cognitive Hospitality Solutions”.

Here are the things I did during my “Zone” (as I like to call it, borrowing the word from psychology) that I think will help you make your data science and machine learning skills really sharp and useful.

1. Create a Map for yourself

As I told you before, my start was off: I jumped into deep learning when I should instead have picked a path from basics to advanced. That way your brain learns step by step and understands things concretely. So take your time and discover the different directions possible within data science, machine learning, etc. There will be plenty of possible paths, but don't get too particular about all of them; just segment the things that interest you (in my case, deep learning with data science) into basic, intermediate, and advanced.

Start with the basics and stay on track: first cover all the basic topics in your area of interest, then solve problems based on those topics without any help. You must first feel comfortable with what you are currently doing before stepping up the difficulty of the topics.

2. Keep developing other skills besides this in parallel

Adding ‘Scientist’ after ‘Data’ is not something you earn just by learning a few libraries in Python, R, or any other language that supports data science. A data scientist must know how to integrate different technologies to achieve the final outcome of a problem. What I mean is that you should be familiar with databases, Git, GitHub, and deployment-related tech, whether that is basic web development to host your application online or Docker to build a container and deploy it on the cloud.

I am not asking you to learn everything; if your end goal is different from all this, discover the things required for your goal alongside data science concepts and coding skills. One skill I believe every data scientist must focus on is writing: it is a basic requirement for creating the report at the end of a project for your stakeholders, and that report's presentation is one of the most important steps in a data scientist's complete work cycle.

3. Don’t get stuck on one Medium

What I mean is that everyone has a comfort zone in how they learn, be it videos, books, or something else. But confining yourself to only one medium can be restrictive. There are brilliant, absolute works of art among books that you should be keen on reading even if you prefer studying from online videos. This flexibility, including reading research papers and blogs, will help you more than you can imagine.

And for people who prefer learning from reading, check out some of the great video courses mentioned below; they visualize the concepts with ease and fun.

4. Socialize Yourself

This step particularly increases your chances of getting an internship or even a job. We can only do so much with our time; if we brand ourselves through our work and social relations, we exponentially increase our chances of getting spotted and offered an opportunity.

The same thing happened to me. In my 5th semester, I scored 96 in the Python end-semester exam, so when the company reached out to some of the faculty at my university, my Python teacher recommended me to the teacher in charge, and she took a shot with me. After that, I gave my interview and was selected as the intern.

5. Learn Beyond Common

Keep your researcher side active while learning. Data science, machine learning, and deep learning are subjects of extensive, continuous research in every corner of the world. So keep a broad mind and learn things beyond the steps of the data science work cycle. No knowledge you gain goes to waste; integrating knowledge from different stages and dimensions of your life makes you who you are today and gives you a unique identity and thought process. So utilize it.

I am mentioning a couple of things that I learned alongside:

Responsible AI (Ethics in AI)

How people perceive different kinds of Visualization (Visualization Wheel Dimensions)

6. Learn from the Best resources

Video Courses:

YouTube channel: freeCodeCamp

Coursera Courses :

IBM Data Science Professional Certificate

Applied Data Science from the University of Michigan, if you are interested in deep learning

Data Science A-Z on Udemy by Kirill Eremenko

Applied Data Science by IBM

For more recommendations, you can contact me on LinkedIn

DataCamp: my favorite resource for data science. Explore it to your heart's content; you will love doing and learning data science on DataCamp.

Reading Courses:

Practical Statistics for Data Scientists 50+ Essential Concepts using R and Python by Peter Bruce, Andrew Bruce, Peter Gedeck

Python Data Science Handbook by Jake VanderPlas published by O’Reilly

The Art of Statistics Learning From Data by David Spiegelhalter

The visual Display of Quantitative Information by Edward R. Tufte

Data Mining Practical machine learning tools and techniques by Ian H. Witten & Eibe Frank

7. Take Proper Notes

This point is self-explanatory: you can't possibly remember every single thing you read, learn, or study. To make your personal search engine (your brain) more efficient and faster, taking proper notes is the best way. You will feel psychologically more powerful whenever you see your notes; they reflect your hard work, your progress, and all the knowledge you have gained so far.

8. Conquer in Steps

You need to feel satisfied with yourself now and then to keep pushing forward and to keep the flame of sincere learning from vanishing. I have seen a lot of people get scared, tired, or simply lose interest in working hard after some time. In my view, this usually happens when you feel you haven't reached your goal and keep walking without appreciating where you stand right now and how far you have come through dedication and hard work.

Therefore, try to set small goals, and once you clear them, be proud: you are the best version of yourself right now, not giving up and walking forward with happiness and satisfaction in mind.

9. Contribute to Communities

Just as you are studying from many wonderful resources, why not contribute once you reach a certain level of knowledge and become such a resource yourself, even for a single person? Sharing knowledge is good not only for keeping the flow of fresh knowledge alive, but also for making a name for yourself. These contributions will give you a standing that nothing else can; psychologically you will feel really powerful, and that will reflect in your upcoming work. It keeps the learning process strong and sharpens your overall image as a data scientist or anything else.

A few examples of such communities are Kaggle, Paperspace, Analytics Vidhya, Medium, etc.

10. If possible find a Mentor

Well, this one is not particularly easy, but it is an extension of the earlier step, “Learn from the best resources”. When you have an expert, or even just a more experienced person, guiding you, it drives you in the most optimized direction for your learning: you wander less and grasp more. The best way is to reach out to as many people as possible on LinkedIn (don't beg or irritate them; just be clear and straightforward about what help you need).

11. Believe in Yourself

I am mentioning the MOST important step at the END because even if you understood all the above steps except this one, you could still fail or get lost among so many things, which I would definitely not want for you. So, however long it takes, if you are clearing your daily and weekly goals and expanding your network of people, believe in yourself: you will get there.


That was the END of this article. I hope you learned something for your OWN journey. Share it with me anytime through LinkedIn.

Gargeya Sharma

For more info, check out my GitHub Home Page

LinkedIn        GitHub

Blog Cover Photo by Mantas Hesthaven on Unsplash

Zone Photo by Paul Skorupskas on Unsplash

The media shown in this article are not owned by Analytics Vidhya and are used at the author’s discretion.

