

Introduction to Digital Learning

The 21st century has rightly been termed the digital era. With the internet substantially changing people’s lives, we depend heavily on technology to complete even simple tasks. Most of you must have heard about digital learning: e-education has transformed the teaching sector. Gone are the days of blackboards, chalk, and dusters; they have been replaced by web-based education that strengthens students’ learning experience.


Teachers Should Be Familiar with Digital Learning Tools

A further benefit of web-based education is that it allows educators to increase their efficiency and productivity. First, however, teachers must get accustomed to the various digital learning tools and programs before they can use them to teach students. It is therefore an optimal opportunity for both learners and trainers to empower themselves with hybrid learning and digital content resources.

However, familiarizing yourself with the different digital tools takes time. You also need to decide whether a given teaching methodology suits the students or the particular subject; only then can you choose the best method to impart knowledge to your students.

Some Institutes Offer Training Sessions

The school or institute that decides to opt for e-learning has a vital role to play here. It can either hire teachers and tutors with comprehensive knowledge of e-education tools and methods, or arrange proper training for its existing teachers in using digital programs effectively. Training sessions are especially valuable for senior teachers who have been teaching for many years. Teachers can also use the internet to stay current with new techniques and pass them on to others. Most private school administrators and managers are more than happy to adopt this technology, which is the need of the hour.

Important Aspects

Some vital factors guide the e-education procedure and separate it from conventional teaching methods. These include:

Freedom to choose the place

Unlike traditional classroom teaching, where every period must be forty minutes long, digital learning or e-education gives both students and teachers the freedom to choose their place. You can take online classes anywhere at your convenience, including at home. However, this applies mostly to professional courses rather than to school-going children.

No Restriction of Time: Learn at Your Own Pace

There is no need to compete with the rest of the class; e-learning lets you learn at your own pace. Tutorial videos are available online, and you can view them as many times as you need to clarify your understanding of a topic. Because the chapters are interactive, you do not have to spend excessive time on any one lesson.

The Digitalized Content

Digitalized content means high-quality academic content that is easy to read and understand, delivered through technical tools such as computers, laptops, smartphones, and other electronic gadgets. Highly experienced academic writers produce the content, which is informative and supported by videos and images for better understanding.

The Main Technical Tool

The internet is the main technical tool behind digital learning, serving as its backbone. You need a computer or another gadget with an internet connection to start the e-learning process. Remember, however, that the internet is just a medium, not the instruction itself.

Online Tutors or Instructors

No teaching or learning can be considered complete without a teacher, and digital learning is no exception. Only the teacher’s role has changed; the basic responsibility of educating students remains, and instructors can now provide personalized guidance to every student.

Why Do All Schools Need to Adopt Digital Learning?

There has been thorough discussion that all educational institutes and academic centers should implement e-education, which has gained immense popularity and support over the last few years. That support will only grow as the number of internet users increases daily.

Support from the Government/Administration

Some Other Wow Factors about Digital Learning

Following are some other wow factors about digital learning:

Digital Learning has made Research Work Simpler

Beyond any doubt, digital learning has made research work simpler, specifically in medical science, information technology, and space. With information easily available online, you can find a solution to any problem or question and carry your work further. Research scholars can prepare their theses or dissertations without much difficulty. The facts and statistics on the internet are current, adding impetus to your work.

Digital Learning: a Boon for Parents as Well

The gift of digital learning has proved a blessing not only for teachers but also for parents, who usually guide their children at home and help them with their studies and homework. Parents can consult the internet and online tutorials anytime to teach their kids better. Moreover, because of its interactive nature, e-learning becomes a fun-filled game for children, who can grasp topics more quickly than with traditional methods.

Preparation for Exams Becomes Easier

Reduce the Burden of Paperwork

The introduction of e-learning has reduced and in places eliminated the burdensome paperwork that was part of the earlier system. Since most exams are online, teachers no longer need to carry answer sheets home for evaluation. Moreover, today’s questions are usually multiple-choice questions (MCQs), which the computer can evaluate in no time. As a result, results are declared much earlier, with more transparency and accuracy than traditional paper checking.

Offers a Strong Platform for Better Communication

Increased digital and internet-based learning has provided a strong platform for better communication between teachers and students. You can chat live with the tutor over webcam using applications such as Skype or Google Hangouts. Communication can be enhanced even further through social media platforms, blogs, and other discussion forums, where you can clear your doubts about assignments and exams and better prepare for the subject.

What is Digital Pedagogy?

Making Digital Learning Successful

Spreading Awareness amongst the Masses

Although digital learning has been identified as one of the most popular paradigms and accepted globally, a lot still needs to be done to make more and more people aware of it. There are many developing and under-developed countries where this concept is still unknown to most of their population.

Making more and more People Computer Literate

Digital learning cannot succeed without proper computer knowledge and training, but unfortunately not everyone is familiar with this machine or knows how to operate it properly. Thus, the first job is to make more people computer literate. Computer education should be a compulsory part of the curriculum in every school, and computer training centers should encourage people to enroll and learn to use the internet.

Digital Learning Courses should be more User-Friendly

The content and quality of digital learning courses should be relevant and meaningful, designed with the learner’s age and learning ability in mind. In short, a course should be user-friendly and easy to understand.

One way to do this is to use various e-learning/authoring tools that would make learning interesting.

The Attitude of the Teacher

The instructor’s attitude can either raise students’ anxiety about internet-based learning or spark their curiosity. Instructors should always encourage their students to use new tools while learning and enhancing their knowledge.

Bringing more Diversity in Courses and Assignments

Conclusion

Recommended Articles

This is a guide to digital learning. Here we have discussed the training sessions institutes offer, the important aspects of digital learning, why all schools need to adopt it, and some other wow factors about it. You may look at the following articles to learn more –


Learn With Linux: Learning Music

Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.

Learning music is a great pastime. Training your ears to identify scales and chords and mastering an instrument or your own voice requires lots of practice and can be difficult. Music theory is extensive: there is much to memorize, and turning it into a skill takes diligence. Linux offers exceptional software to help you along your musical journey. These tools will not make you a professional musician instantly, but they can ease the process of learning, serving as a great aid and reference point.

Gnu Solfège

Solfège is a popular music education method used at all levels of music education around the world. Many popular methods (like the Kodály method) use Solfège as their basis. GNU Solfège is great software aimed more at practising Solfège than teaching it; it assumes the student has already acquired the basics and wishes to practise what they have learned.

As the developer states on the GNU website:

“When you study music on high school, college, music conservatory, you usually have to do ear training. Some of the exercises, like sight singing, is easy to do alone [sic]. But often you have to be at least two people, one making questions, the other answering. […] GNU Solfège tries to help out with this. With Solfege you can practise the more simple and mechanical exercises without the need to get others to help you. Just don’t forget that this program only touches a part of the subject.”

The software delivers on its promise; you can practise essentially everything with audible and visual aids.

GNU Solfège is in the Debian (and therefore Ubuntu) repositories. To get it, just type the following command into a terminal (the package is named solfege):

sudo apt-get install solfege

The number of options is almost overwhelming. Most of the links open sub-categories from which you can select individual exercises.

There are practice sessions and tests. Both will be able to play the tones through any connected MIDI device or just your sound card’s MIDI player. The exercises often have visual notation and the ability to play back the sequence slowly.

Solfège can be very helpful for your daily practice. Use it regularly and you will have trained your ear before you can sing do-re-mi.

Tete (ear trainer)

Tete (This ear trainer ‘ere) is a Java application for simple yet efficient ear training. It helps you identify a variety of scales by playing them back under various circumstances, from different roots and on different MIDI sounds. Download it from SourceForge, then unzip the downloaded file.




Enter the unpacked directory:




Assuming you have Java installed on your system, you can run the jar file with:

java -jar Tete-<your version>.jar

(To autocomplete the above command, just press the Tab key after typing “Tete-”.)

Tete has a simple, one-page interface with everything on it.

You can choose to play scales, chords, or intervals.

You can “fine tune” your experience with various options including the midi instrument’s sound, what note to start from, ascending or descending scales, and how slow/fast the playback should be. Tete’s SourceForge page includes a very useful tutorial that explains most aspects of the software.


Jalmus

Jalmus is a Java-based keyboard note-reading trainer. It works with an attached MIDI keyboard or with the on-screen virtual keyboard. It has many simple lessons and exercises for training music reading. Unfortunately, its development has been discontinued since 2013, but the software appears to still be functional.

To get Jalmus, head over to the sourceforge page of its last version (2.3) to get the Java installer, or just type the following command into a terminal:



You will be guided through a simple Java-based installer designed for cross-platform installation.

Jalmus’s main screen is plain.

You can find lessons of varying difficulty in the Lessons menu. They range from very simple ones, where a single note swims in from the left and the corresponding key lights up on the on-screen keyboard …

… to difficult ones with many notes swimming in from the right, and you are required to repeat the sequence on your keyboard.

Jalmus also includes note-reading exercises with single notes, which are very similar to the lessons but without the visual hints; your score is displayed after you finish. It also aids rhythm reading of varying difficulty, where the rhythm is both audible and visually marked and a metronome (audible and visual) supports understanding, as well as score reading, where multiple notes are played.

All these options are configurable; you can switch features on and off as you like.

All things considered, Jalmus probably works best for rhythm training. Although it was not necessarily its intended purpose, the software really excelled in this particular use-case.

Notable mentions

TuxGuitar

For guitarists, TuxGuitar works much like Guitar Pro on Windows (and it can also read guitar-pro files).


Piano Booster

Piano Booster can help with piano skills. It is designed to play MIDI files, which you can play along with on an attached keyboard, watching the score roll past on the screen.


Attila Orosz


8 Principles Of Deeper Learning

Key elements that teachers can use in instructional design to guide students to engage more deeply with content.

As educators we all recognize the importance of sparking a student’s curiosity and motivation to learn. We know that when students are provided with opportunities to undertake meaningful tasks to solve real-world problems, engagement soars.

But teachers today are under a great deal of pressure to cover standards so that students pass a test that measures proficiency. In many cases, curriculum and instruction have been stifled by strict pacing guides and a focus on discrete learning.

I collaborate with educators weekly who share the deeper learning that can be achieved when they are empowered to design meaningful learning experiences for the students they serve. Teachers share lessons that are authentic, hands-on, challenging, and purposeful. These lessons address more than standards: They focus on many of the soft skills we know are critical for student success in college, career, and life—skills such as being able to collaborate, create, solve problems, communicate effectively, and persevere in tasks.

There are actions we can take now to empower teachers to achieve these deeper levels of learning. Through intentional instructional design we can guide students to think critically about arguments, concepts, and ideas and to create solutions to real-world problems.

8 Steps to Deeper Learning

1. Learning goals and success criteria: Any great lesson begins with clear goals for what students need to know and be able to do. Goals, coupled with criteria for success, should be communicated to students in a manner that clarifies our expectations and serves as a guide for self-assessment.

2. Compelling content and products: Beyond discrete standards, teachers have the opportunity to use content and performance expectations to create real-world problems or situations for students to solve. Learning experiences that offer authentic, interdisciplinary tasks provide relevance and promote curiosity for students.

3. Collaborative culture: Learning is social, and the purposeful inclusion of collaboration throughout the learning process is highly engaging for students. There are endless design options for collaboration, including flexible groups, partners, peer tutoring, Socratic seminars, academic discussion, and online experts.

4. Student empowerment: Students’ ownership of their learning increases exponentially when they’re given choice over how to show mastery or create a final product or performance, including using digital tools and resources. Additionally, inviting students to provide input into what they learn and how they engage with content allows them to play the role of co-designer.

5. Intentional instruction: Evidence-based strategies should be carefully selected in order to have the greatest impact on the learning goals. One such strategy is the gradual release of responsibility (GRR) model, which provides structure for direct instruction and modeling (“show them”) and guided practice on a task (“help them”), before students attempt it independently (“let them”).

6. Authentic tools and resources: Students should have access to a variety of tools and resources, both print and digital, throughout the learning process and when creating products to demonstrate their learning. Providing a variety of tools offers students choice and emphasizes process over product. Digital strategies such as blended learning and flipped classrooms offer rich experiences that are highly engaging and honor how students like to learn and create.

7. Focus on literacy: Regardless of the content, reading, writing, and speaking should be incorporated into every learning experience. Expose students to multiple texts, primary and secondary sources, and online resources. Engage students in opportunities to write often—e.g., by assigning lab reports, technical manuals, narrative stories, research summaries, opinion papers, or interactive notebooks.

8. Feedback for learning: Throughout the learning experience, there are feedback loops to give students guidance on their progress toward the learning goals. This feedback can be teacher-to-student, student-to-student, or self-assessment. Feedback is formative and provides students with the safety and security of knowing they can take risks and try new things without fear of failure.

A Deeper Learning Lesson

Here’s a lesson that shows how these elements come together, using “Thank You, Ma’am,” by Langston Hughes, a story with two characters: Roger, a teenage boy who wants a pair of shoes and attempts to steal the purse of Luella Jones. The learning goals for the lesson are to cite textual evidence, analyze how an author develops the points of view of different characters, and write arguments to support claims with logical reasoning and relevant evidence.

Have students think of Roger as a juvenile offender appearing in court for violation of probation, including charges of attempted theft, harassment, assault, and breaking curfew. Provide background knowledge about the court system and the procedures for a hearing—this can be done with videos, a video call with local officials, or even a trip to the local courthouse.

Assume that Roger is a student at a local school and lives in a group home for children with no families. Students will take on one of the following roles and determine whether to prosecute or defend Roger: a parole officer, a social worker, or a son or daughter of Luella Jones.

They will prepare a minimum three-paragraph opinion for the judge and jury, citing at least three pieces of evidence to support their position for or against Roger. There are multiple opportunities to explore themes of shame and forgiveness, as well as empathy, throughout this experience.

Students next collaborate with others in class who chose the same role, sharing their arguments and evidence. They then make revisions to their argument and practice presenting their opinion with their group, brainstorming possible questions that could be asked during a trial.

You should then run the trial a couple of times, so that every student participates at least once as either judge, jury, prosecution, or defense.

This plan for instruction makes the learning a memorable experience. Students are provided with an authentic problem that provides purpose and a context for learning.

Supervised Learning Vs. Unsupervised Learning – A Quick Guide For Beginners


“What’s the difference between supervised learning and unsupervised learning?”

This is an all too common question among beginners and newcomers in machine learning. The answer to this lies at the core of understanding the essence of machine learning algorithms. Without a clear distinction between supervised and unsupervised learning, your journey simply cannot progress.

This is actually among the first things you should learn when embarking on your machine learning journey. We cannot simply jump into the model-building phase without understanding which category algorithms like linear regression, logistic regression, clustering, and neural networks fall under.

If we don’t know what the objective of the machine learning algorithm is, we will fail in our endeavor to build an accurate model. This is where the idea of supervised learning and unsupervised learning comes in.

In this article, I will discuss these two concepts using examples and also answer the big question – how to decide when to use supervised learning or unsupervised learning?



Let’s begin by taking a look at Supervised Learning.

In supervised learning, the computer is taught by example: it learns from past data and applies that learning to present data to predict future events. Both the input data and the desired output data help in predicting those future events.

For accurate predictions, the input data is labeled or tagged with the right answer.

Supervised Machine Learning Categorisation

It is important to remember that supervised learning algorithms are categorized as either classification or regression models.

1) Classification Models – Classification models are used for problems where the output variable is a category, such as “Yes” or “No”, or “Pass” or “Fail”; they predict the category of the data. Real-life examples include spam detection, sentiment analysis, and predicting exam results.

2) Regression Models – Regression models are used for problems where the output variable is a real value, such as a price, salary, weight, or pressure. They are most often used to predict numerical values based on previous data observations. Familiar regression algorithms include linear regression, polynomial regression, and ridge regression (logistic regression, despite its name, is a classification method).
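To make the distinction concrete, here is a minimal sketch in plain Python (the hours-studied and exam-score numbers are invented for illustration): a least-squares line fit predicts a real-valued score (regression), while a toy one-nearest-neighbour rule predicts a Pass/Fail category (classification).

```python
# Hypothetical data: hours studied vs. exam outcome (made-up numbers).
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
scores = [52.0, 55.0, 61.0, 64.0, 70.0]            # real-valued target -> regression
labels = ["Fail", "Fail", "Pass", "Pass", "Pass"]  # categorical target -> classification

# Regression: least-squares line fit, predicting a number.
n = len(hours)
mean_h, mean_s = sum(hours) / n, sum(scores) / n
slope = sum((h - mean_h) * (s - mean_s) for h, s in zip(hours, scores)) \
        / sum((h - mean_h) ** 2 for h in hours)
intercept = mean_s - slope * mean_h
predicted_score = slope * 6.0 + intercept  # a real value for 6 hours of study

# Classification: a toy 1-nearest-neighbour rule, predicting a category.
def predict_label(x):
    nearest = min(range(n), key=lambda i: abs(hours[i] - x))
    return labels[nearest]
```

On this toy data, the fitted line predicts a score of about 74 for 6 hours of study, while `predict_label(4.8)` returns `"Pass"`; the same inputs, but two different kinds of output.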

There are some very practical applications of supervised learning algorithms in real life, including:

Text categorization

Face Detection

Signature recognition

Customer discovery

Spam detection

Weather forecasting

Predicting housing prices based on the prevailing market price

Stock price predictions, among others

Unsupervised learning, on the other hand, trains machines on data that is neither classified nor labeled. No labeled training data is provided, and the machine must learn by itself, classifying the data without any prior information about it.

The idea is to expose the machine to large volumes of varied data and allow it to learn from that data, providing insights that were previously unknown and identifying hidden patterns. As such, there aren’t necessarily defined outcomes from unsupervised learning algorithms; rather, the algorithm determines what is different or interesting in the given dataset.

The machine needs to be programmed to learn by itself, understanding and providing insights from both structured and unstructured data. Two common unsupervised learning methods are clustering and anomaly detection.

1) Clustering is one of the most common unsupervised learning methods. The method of clustering involves organizing unlabelled data into similar groups called clusters. Thus, a cluster is a collection of similar data items. The primary goal here is to find similarities in the data points and group similar data points into a cluster.
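The clustering idea can be sketched with a minimal one-dimensional k-means in plain Python (the measurements and starting centers are made up for illustration; libraries such as scikit-learn provide production implementations):

```python
# Unlabelled 1-D measurements that visibly form two groups (made-up data).
points = [1.0, 1.2, 0.8, 1.1, 5.0, 5.3, 4.9, 5.2]

def kmeans_1d(data, centers, iters=10):
    """Minimal k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in data:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) for c in clusters]
    return centers, clusters

centers, clusters = kmeans_1d(points, centers=[0.0, 10.0])
```

With no labels anywhere in the input, the algorithm still recovers the two natural groups (centers near 1 and near 5), which is exactly the "find similarities and group them" goal described above.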

2) Anomaly detection is the method of identifying rare items, events or observations which differ significantly from the majority of the data. We generally look for anomalies or outliers in data because they are suspicious. Anomaly detection is often utilized in bank fraud and medical error detection.
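One of the simplest anomaly-detection rules is the z-score: flag any observation more than a chosen number of standard deviations from the mean. A sketch on made-up sensor readings:

```python
# Made-up sensor readings with one obvious outlier.
readings = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 25.0, 9.7]

mean = sum(readings) / len(readings)
std = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5

# Flag any reading more than 2 standard deviations from the mean.
anomalies = [r for r in readings if abs(r - mean) / std > 2]
```

Here only the reading 25.0 is flagged. Real fraud or error detection uses far richer models, but the principle (what differs significantly from the majority of the data) is the same.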

Applications of Unsupervised Learning Algorithms

Some practical applications of unsupervised learning algorithms include:

Fraud detection

Malware detection

Conducting accurate basket analysis, etc.

When Should you Choose Supervised Learning vs. Unsupervised Learning?

In manufacturing, a large number of factors affect which machine learning approach is best for any given task. And, since every machine learning problem is different, deciding on which technique to use is a complex process.

In general, a good strategy for honing in on the right machine learning approach is to:

Evaluate the data. Is it labeled or unlabelled? Is expert knowledge available to support additional labeling? This will help determine whether a supervised, unsupervised, semi-supervised, or reinforcement learning approach should be used

Define the goal. Is the problem a recurring, well-defined one? Or will the algorithm be expected to predict new problems?

Review available algorithms that may suit the problem with regards to dimensionality (number of features, attributes or characteristics). Candidate algorithms should be suited to the overall volume of data and its structure

Study successful applications of the algorithm type on similar problems

End Notes

Supervised learning and unsupervised learning are key concepts in the field of machine learning. A proper understanding of the basics is very important before you jump into the pool of different machine learning algorithms.

As a next step, go ahead and check out the below article that covers the popular and core machine learning algorithms:


Playing Super Mario Bros With Deep Reinforcement Learning

This article was published as a part of the Data Science Blogathon


From this article, you will learn how to play Super Mario Bros with Deep Q-Network and Double Deep Q-Network (with code!).

Photo by Cláudio Luiz Castro on Unsplash

Super Mario Bros is a well-known video game title developed and published by Nintendo in the 1980s. It is one of the classic titles that has lived through the years and needs no explanation. It is a 2D side-scrolling game that lets the player control the main character, Mario.

The game environment was taken from OpenAI Gym, using the Nintendo Entertainment System (NES) Python emulator. In this article, I will show how to implement reinforcement learning with the Deep Q-Network (DQN) and Double Deep Q-Network (DDQN) algorithms using the PyTorch library, and evaluate experiments conducted with each algorithm to compare their performance.

Data understanding and preprocessing

The original observation space for Super Mario Bros is a 240 x 256 x 3 RGB image, and the action space is 256, meaning the agent can take 256 different possible actions. To speed up the training of our model, we used gym’s wrapper functions to apply certain transformations to the original environment:

Repeating each action of the agent over 4 frames and reducing the video frame size, i.e. each state in the environment is 4 x 84 x 84 x 1 (a stack of 4 consecutive 84 x 84 grayscale pixel frames)

Normalizing the pixel value to the range from 0 to 1

Reducing the number of actions to 5 (Right only), 7 (Simple movement) and 12 (Complex movement)

Theoretical Results

Initially, I planned to run an experiment using Q-learning, which uses a 2-D array to store all possible combinations of state-action pair values. However, in this environment I realized that applying plain Q-learning is impossible, as it would require storing an infeasibly large Q-table.

Therefore, this project used the DQN algorithm as the baseline model. DQN uses Q-learning to learn the best action to take in a given state, with a deep neural network to estimate the Q-value function.

The deep neural network used is a three-layer convolutional neural network followed by two fully connected linear layers, with a single output for each possible action. This network plays the role of the Q-table in the Q-learning algorithm. The objective loss function is the Huber loss (smoothed mean absolute error) on Q-values; Huber loss combines MSE and MAE behavior when minimizing the objective function. The optimizer used is Adam.
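The Huber loss mentioned above can be sketched in plain Python; with δ = 1 (the setting used in the experiments below) it is quadratic (MSE-like) for errors up to 1 and linear (MAE-like) beyond:

```python
def huber(q_pred, q_target, delta=1.0):
    """Quadratic (MSE-like) when |error| <= delta, linear (MAE-like) beyond it."""
    err = abs(q_pred - q_target)
    if err <= delta:
        return 0.5 * err ** 2
    return delta * (err - 0.5 * delta)
```

The linear tail is what makes training robust: a wildly wrong Q-value estimate contributes a bounded gradient instead of the huge one that squared error would produce.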

However, the DQN network has the problem of overestimation.

Figure 1: Illustration of how the DQN network overestimates Q-values

There are 2 main reasons for overestimation, as shown in Fig 1. The first reason is the maximization used to calculate the target value. Assume the true action-values are denoted by x(a₁), …, x(aₙ), and the noisy estimates made by DQN are denoted by Q(s, a₁; w), …, Q(s, aₙ; w). Mathematically,

E[maxᵢ Q(s, aᵢ; w)] ≥ maxᵢ E[Q(s, aᵢ; w)] = maxᵢ x(aᵢ),

so taking the maximum of the noisy estimates overestimates the true Q-value.

The second reason is that the overestimated Q-values are themselves used to update the weights of the Q-network through backpropagation, which makes the overestimation more severe.

The main drawback is that DQN’s overestimation is non-uniform: the more frequently a specific state-action pair appears in the replay buffer, the more its value is overestimated.

To obtain more accurate Q-values, we use the DDQN network on our problem and compare the experimental results against the previous DQN network. To alleviate the overestimation caused by maximization, DDQN uses 2 Q-networks: the online network Q*, whose weights are updated through backpropagation, and a copy Q^ used to evaluate the selected action. The DDQN Q-learning update is:

Q*(s, a) ← r + γ · Q^(s′, argmaxₐ′ Q*(s′, a′))

Q* is the one whose weights are updated, while Q^ evaluates the action chosen by Q*; Q^ simply copies the weights of Q* every n steps.
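The difference between the two targets can be illustrated with made-up Q-values (a toy sketch, not the training loop): plain DQN takes the max over a single set of noisy estimates, while Double DQN lets the trained network pick the action and the periodically synced copy evaluate it.

```python
gamma, reward = 0.9, 1.0
# Hypothetical Q-values for the next state s' (invented for illustration).
q_online = [1.0, 3.0, 2.0]   # the network being trained (Q*)
q_frozen = [1.5, 2.0, 2.5]   # the copy synced every n steps (Q^)

# DQN-style target: max over one network's own noisy estimates -> overestimates.
dqn_target = reward + gamma * max(q_frozen)

# Double DQN target: Q* selects the action, Q^ evaluates it.
best = max(range(len(q_online)), key=lambda a: q_online[a])
ddqn_target = reward + gamma * q_frozen[best]
```

Decoupling selection from evaluation this way means a noise spike in one network is unlikely to be confirmed by the other, which is the mechanism that dampens the overestimation described above.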

Experiment Results

There were 5 experiments conducted based on different movements of the agent using 2 algorithms, DQN and DDQN. The different movements are complex movements, simple movements, and right-only movements.

The parameters settings are as follows :

Observation space: 4 x 84 x 84 x 1

Action space: 12 (Complex Movement) or 7 (Simple Movement) or 5 (Right only movement)

Loss function: HuberLoss with δ = 1

Optimizer: Adam with lr = 0.00025

betas = (0.9, 0.999)

Batch size = 64

Dropout = 0.2

gamma = 0.9

Max memory size for experience replay = 30000

For epsilon greedy: Exploration decay = 0.99, Exploration min = 0.05

At the beginning, with exploration max = 1, the agent takes random actions. After each episode, the exploration rate decays by the exploration decay factor until it reaches the exploration minimum of 0.05.
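The decay schedule can be verified directly with the values listed above:

```python
eps, eps_min, decay = 1.0, 0.05, 0.99
history = []
for episode in range(1000):
    history.append(eps)
    eps = max(eps * decay, eps_min)   # decay once per episode, floored at eps_min

# 0.99**n <= 0.05 once n >= log(0.05)/log(0.99) ≈ 298, so after roughly 300
# episodes the agent settles into taking a random action only 5% of the time
```

So most of the exploration budget is spent in the first few hundred of the 10,000 training episodes.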

Experiment 1

The first experiment conducted was to compare DDQN and DQN algorithms for the complex movement of the agent.

Experiment 2

Experiment 3

From the above three experiment results, we can see that in all cases DQN's performance at episode 10,000 is approximately the same as DDQN's performance at episode 2,000. So we can conclude that the DDQN network helps to alleviate the overestimation problem caused by the DQN network.

Further experiments were conducted using DDQN and DQN across the three different movement sets.

Experiment 4

The fourth experiment used the DDQN algorithm on all three movement sets.

Experiment 5

From the above two experiment results, we can conclude that the network trains better on the right-only action space, which restricts the agent to moving right.

Codes

import torch
import torch.nn as nn
import random
from nes_py.wrappers import JoypadSpace
import gym_super_mario_bros
from tqdm import tqdm
import pickle
from gym_super_mario_bros.actions import RIGHT_ONLY, SIMPLE_MOVEMENT, COMPLEX_MOVEMENT
import gym
import numpy as np
import collections
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
import time
import pylab as pl
from IPython import display


class MaxAndSkipEnv(gym.Wrapper):
    """
    Each action of the agent is repeated over skip frames;
    return only every `skip`-th frame
    """
    def __init__(self, env=None, skip=4):
        super(MaxAndSkipEnv, self).__init__(env)
        # most recent raw observations (for max pooling across time steps)
        self._obs_buffer = collections.deque(maxlen=2)
        self._skip = skip

    def step(self, action):
        total_reward = 0.0
        done = None
        for _ in range(self._skip):
            obs, reward, done, info = self.env.step(action)
            self._obs_buffer.append(obs)
            total_reward += reward
            if done:
                break
        max_frame = np.max(np.stack(self._obs_buffer), axis=0)
        return max_frame, total_reward, done, info

    def reset(self):
        """Clear past frame buffer and init to first obs"""
        self._obs_buffer.clear()
        obs = self.env.reset()
        self._obs_buffer.append(obs)
        return obs


class MarioRescale84x84(gym.ObservationWrapper):
    """
    Downsamples/Rescales each frame to size 84x84 with greyscale
    """
    def __init__(self, env=None):
        super(MarioRescale84x84, self).__init__(env)
        self.observation_space = gym.spaces.Box(low=0, high=255, shape=(84, 84, 1), dtype=np.uint8)

    def observation(self, obs):
        return MarioRescale84x84.process(obs)

    @staticmethod
    def process(frame):
        if frame.size == 240 * 256 * 3:
            img = np.reshape(frame, [240, 256, 3]).astype(np.float32)
        else:
            assert False, "Unknown resolution."
        # greyscale conversion using luminance weights on RGB
        img = img[:, :, 0] * 0.299 + img[:, :, 1] * 0.587 + img[:, :, 2] * 0.114
        resized_screen = cv2.resize(img, (84, 110), interpolation=cv2.INTER_AREA)
        x_t = resized_screen[18:102, :]
        x_t = np.reshape(x_t, [84, 84, 1])
        return x_t.astype(np.uint8)


class ImageToPyTorch(gym.ObservationWrapper):
    """
    Each frame is converted to PyTorch tensors (channels-first layout)
    """
    def __init__(self, env):
        super(ImageToPyTorch, self).__init__(env)
        old_shape = self.observation_space.shape
        self.observation_space = gym.spaces.Box(low=0.0, high=1.0,
                                                shape=(old_shape[-1], old_shape[0], old_shape[1]),
                                                dtype=np.float32)

    def observation(self, observation):
        return np.moveaxis(observation, 2, 0)


class BufferWrapper(gym.ObservationWrapper):
    """
    Only every k-th frame is collected by the buffer
    """
    def __init__(self, env, n_steps, dtype=np.float32):
        super(BufferWrapper, self).__init__(env)
        self.dtype = dtype
        old_space = env.observation_space
        self.observation_space = gym.spaces.Box(old_space.low.repeat(n_steps, axis=0),
                                                old_space.high.repeat(n_steps, axis=0),
                                                dtype=dtype)

    def reset(self):
        self.buffer = np.zeros_like(self.observation_space.low, dtype=self.dtype)
        return self.observation(self.env.reset())

    def observation(self, observation):
        self.buffer[:-1] = self.buffer[1:]
        self.buffer[-1] = observation
        return self.buffer


class PixelNormalization(gym.ObservationWrapper):
    """
    Normalize pixel values from [0, 255] to [0, 1]
    """
    def observation(self, obs):
        return np.array(obs).astype(np.float32) / 255.0


def create_mario_env(env):
    env = MaxAndSkipEnv(env)
    env = MarioRescale84x84(env)
    env = ImageToPyTorch(env)
    env = BufferWrapper(env, 4)
    env = PixelNormalization(env)
    return JoypadSpace(env, SIMPLE_MOVEMENT)


class DQNSolver(nn.Module):
    """
    Convolutional Neural Net with 3 conv layers and two linear layers
    """
    def __init__(self, input_shape, n_actions):
        super(DQNSolver, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU()
        )
        conv_out_size = self._get_conv_out(input_shape)
        self.fc = nn.Sequential(
            nn.Linear(conv_out_size, 512),
            nn.ReLU(),
            nn.Linear(512, n_actions)
        )

    def _get_conv_out(self, shape):
        o = self.conv(torch.zeros(1, *shape))
        return int(

    def forward(self, x):
        conv_out = self.conv(x).view(x.size()[0], -1)
        return self.fc(conv_out)


class DQNAgent:

    def __init__(self, state_space, action_space, max_memory_size, batch_size, gamma, lr,
                 dropout, exploration_max, exploration_min, exploration_decay, double_dqn, pretrained):
        # Define DQN Layers
        self.state_space = state_space
        self.action_space = action_space
        self.double_dqn = double_dqn
        self.pretrained = pretrained
        self.device = 'cuda' if torch.cuda.is_available() else 'cpu'

        # Double DQN network
        if self.double_dqn:
            self.local_net = DQNSolver(state_space, action_space).to(self.device)
            self.target_net = DQNSolver(state_space, action_space).to(self.device)
            if self.pretrained:
                self.local_net.load_state_dict(torch.load("", map_location=torch.device(self.device)))
                self.target_net.load_state_dict(torch.load("", map_location=torch.device(self.device)))
            self.optimizer = torch.optim.Adam(self.local_net.parameters(), lr=lr)
            self.copy = 5000  # Copy the local model weights into the target network every 5000 steps
            self.step = 0
        # DQN network
        else:
            self.dqn = DQNSolver(state_space, action_space).to(self.device)
            if self.pretrained:
                self.dqn.load_state_dict(torch.load("", map_location=torch.device(self.device)))
            self.optimizer = torch.optim.Adam(self.dqn.parameters(), lr=lr)

        # Create memory
        self.max_memory_size = max_memory_size
        if self.pretrained:
            self.STATE_MEM = torch.load("")
            self.ACTION_MEM = torch.load("")
            self.REWARD_MEM = torch.load("")
            self.STATE2_MEM = torch.load("")
            self.DONE_MEM = torch.load("")
            with open("ending_position.pkl", 'rb') as f:
                self.ending_position = pickle.load(f)
            with open("num_in_queue.pkl", 'rb') as f:
                self.num_in_queue = pickle.load(f)
        else:
            self.STATE_MEM = torch.zeros(max_memory_size, *self.state_space)
            self.ACTION_MEM = torch.zeros(max_memory_size, 1)
            self.REWARD_MEM = torch.zeros(max_memory_size, 1)
            self.STATE2_MEM = torch.zeros(max_memory_size, *self.state_space)
            self.DONE_MEM = torch.zeros(max_memory_size, 1)
            self.ending_position = 0
            self.num_in_queue = 0

        self.memory_sample_size = batch_size

        # Learning parameters
        self.gamma = gamma
        self.l1 = nn.SmoothL1Loss().to(self.device)  # Also known as Huber loss
        self.exploration_max = exploration_max
        self.exploration_rate = exploration_max
        self.exploration_min = exploration_min
        self.exploration_decay = exploration_decay

    def remember(self, state, action, reward, state2, done):
        """Store the experiences in a buffer to use later"""
        self.STATE_MEM[self.ending_position] = state.float()
        self.ACTION_MEM[self.ending_position] = action.float()
        self.REWARD_MEM[self.ending_position] = reward.float()
        self.STATE2_MEM[self.ending_position] = state2.float()
        self.DONE_MEM[self.ending_position] = done.float()
        self.ending_position = (self.ending_position + 1) % self.max_memory_size  # FIFO tensor
        self.num_in_queue = min(self.num_in_queue + 1, self.max_memory_size)

    def batch_experiences(self):
        """Randomly sample 'batch size' experiences"""
        idx = random.choices(range(self.num_in_queue), k=self.memory_sample_size)
        STATE = self.STATE_MEM[idx]
        ACTION = self.ACTION_MEM[idx]
        REWARD = self.REWARD_MEM[idx]
        STATE2 = self.STATE2_MEM[idx]
        DONE = self.DONE_MEM[idx]
        return STATE, ACTION, REWARD, STATE2, DONE

    def act(self, state):
        """Epsilon-greedy action"""
        if self.double_dqn:
            self.step += 1
        if random.random() < self.exploration_rate:
            return torch.tensor([[random.randrange(self.action_space)]])
        if self.double_dqn:
            # Local net is used for the policy
            return torch.argmax(self.local_net("cpu")
        else:
            return torch.argmax(self.dqn("cpu")

    def copy_model(self):
        """Copy local net weights into target net for DDQN network"""
        self.target_net.load_state_dict(self.local_net.state_dict())

    def experience_replay(self):
        """Use the double Q-update or Q-update equations to update the network weights"""
        if self.double_dqn and self.step % self.copy == 0:
            self.copy_model()
            return
        # Sample a batch of experiences
        STATE, ACTION, REWARD, STATE2, DONE = self.batch_experiences()
        STATE =
        ACTION =
        REWARD =
        STATE2 =
        DONE =

        self.optimizer.zero_grad()
        if self.double_dqn:
            # Double Q-Learning target is Q*(S, A) <- r + γ max_a Q_target(S', a)
            target = REWARD + torch.mul((self.gamma * self.target_net(STATE2).max(1).values.unsqueeze(1)), 1 - DONE)
            current = self.local_net(STATE).gather(1, ACTION.long())  # Local net approximation of Q-value
        else:
            # Q-Learning target is Q*(S, A) <- r + γ max_a Q(S', a)
            target = REWARD + torch.mul((self.gamma * self.dqn(STATE2).max(1).values.unsqueeze(1)), 1 - DONE)
            current = self.dqn(STATE).gather(1, ACTION.long())

        loss = self.l1(current, target)
        loss.backward()  # Compute gradients
        self.optimizer.step()  # Backpropagate error

        self.exploration_rate *= self.exploration_decay
        # Makes sure that exploration rate is always at least 'exploration min'
        self.exploration_rate = max(self.exploration_rate, self.exploration_min)


def show_state(env, ep=0, info=""):
    """While testing, show the Mario playing environment"""
    plt.figure(3)
    plt.clf()
    plt.imshow(env.render(mode='rgb_array'))
    plt.title("Episode: %d %s" % (ep, info))
    plt.axis('off')
    display.clear_output(wait=True)
    display.display(plt.gcf())


def run(training_mode, pretrained, double_dqn, num_episodes=1000, exploration_max=1):
    env = gym_super_mario_bros.make('SuperMarioBros-1-1-v0')
    env = create_mario_env(env)  # Wraps the environment so that frames are grayscale
    observation_space = env.observation_space.shape
    action_space = env.action_space.n
    agent = DQNAgent(state_space=observation_space,
                     action_space=action_space,
                     max_memory_size=30000,
                     batch_size=32,
                     gamma=0.90,
                     lr=0.00025,
                     dropout=0.2,
                     exploration_max=1.0,
                     exploration_min=0.02,
                     exploration_decay=0.99,
                     double_dqn=double_dqn,
                     pretrained=pretrained)

    # Restart the environment for each episode
    num_episodes = num_episodes
    env.reset()
    total_rewards = []
    if training_mode and pretrained:
        with open("total_rewards.pkl", 'rb') as f:
            total_rewards = pickle.load(f)

    for ep_num in tqdm(range(num_episodes)):
        state = env.reset()
        state = torch.Tensor([state])
        total_reward = 0
        steps = 0
        while True:
            if not training_mode:
                show_state(env, ep_num)
            action = agent.act(state)
            steps += 1
            state_next, reward, terminal, info = env.step(int(action[0]))
            total_reward += reward
            state_next = torch.Tensor([state_next])
            reward = torch.tensor([reward]).unsqueeze(0)
            terminal = torch.tensor([int(terminal)]).unsqueeze(0)
            if training_mode:
                agent.remember(state, action, reward, state_next, terminal)
                agent.experience_replay()
            state = state_next
            if terminal:
                break
        total_rewards.append(total_reward)
        if ep_num != 0 and ep_num % 100 == 0:
            print("Episode {} score = {}, average score = {}".format(ep_num + 1, total_rewards[-1], np.mean(total_rewards)))
        num_episodes += 1

    print("Episode {} score = {}, average score = {}".format(ep_num + 1, total_rewards[-1], np.mean(total_rewards)))

    # Save the trained memory so that we can continue from where we stop using 'pretrained' = True
    if training_mode:
        with open("ending_position.pkl", "wb") as f:
            pickle.dump(agent.ending_position, f)
        with open("num_in_queue.pkl", "wb") as f:
            pickle.dump(agent.num_in_queue, f)
        with open("total_rewards.pkl", "wb") as f:
            pickle.dump(total_rewards, f)
        if agent.double_dqn:
  , "")
  , "")
        else:
  , "")
, "")
, "")
, "")
, "")
, "")
    env.close()


# For training
run(training_mode=True, pretrained=False, double_dqn=True, num_episodes=1, exploration_max=1)

# For Testing
run(training_mode=False, pretrained=True, double_dqn=True, num_episodes=1, exploration_max=0.05)



DDQN takes far fewer episodes to train than DQN. Thus, the DDQN network helps to alleviate the overestimation issue found in the DQN network. Both DQN and DDQN train better on the right-only action space than on the simple and complex movement action spaces.

About the Author

Connect with me on LinkedIn Here.

Thanks for giving your time!

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


Discover The Applications Of Deep Learning In Healthcare

The applications of DL in healthcare are many, from improved diagnostics to enhanced disease prediction

Deep learning, a subset of artificial intelligence (AI), has emerged as a revolutionary technology with profound applications in the healthcare industry. In healthcare, deep learning algorithms can be trained to analyze medical images, such as X-rays, MRIs, and CT scans, with remarkable accuracy, aiding in the early detection of diseases and improving diagnostic outcomes. These deep learning algorithms have also shown promise in drug discovery, genomics research, and clinical decision support systems. By harnessing the power of deep learning, healthcare providers can revolutionize patient care, improve efficiency, and ultimately save lives. This article will explore the diverse applications of deep learning in healthcare and their potential impact.

With its ability to analyze vast amounts of complex data, deep learning holds promise for revolutionizing healthcare through improved diagnostics, personalized medicine, drug discovery, disease prediction, and more. The sections below cover its main applications.

1. Medical Imaging

Deep learning algorithms have demonstrated remarkable capabilities in interpreting medical images such as X-rays, CT scans, MRI scans, and mammograms. By training on large datasets, deep learning models can accurately identify patterns, anomalies, and early signs of diseases. This technology can assist radiologists in diagnosing conditions like cancer, cardiovascular diseases, and neurological disorders, leading to faster and more accurate diagnoses.

2. Disease Diagnosis and Prediction

Deep learning models can aid in diagnosing diseases by analyzing patient data, including medical records, symptoms, genetic information, and laboratory results. By leveraging this information, deep learning algorithms can identify disease patterns and provide more accurate and timely diagnoses. Additionally, these models can predict the likelihood of developing certain diseases based on risk factors, allowing for early intervention and prevention.

3. Drug Discovery and Development

The process of discovering and developing new drugs is time-consuming and costly. Deep learning can accelerate this process by analyzing vast amounts of biomedical data, including molecular structures, genomic data, and clinical trial results. By predicting the efficacy and safety of potential drug candidates, deep learning can help researchers identify promising molecules, optimize drug design, and reduce the time and cost associated with bringing new drugs to market.

4. Personalized Medicine

5. Electronic Health Records (EHR) Analysis

Deep learning can unlock valuable insights from electronic health records containing vast patient data. By analyzing this data, deep learning models can identify patterns, predict disease progression, and enable early intervention. Moreover, deep learning algorithms can improve EHR accuracy by automatically extracting relevant information, reducing errors, and enhancing healthcare providers’ ability to make informed decisions.

6. Clinical Decision Support Systems

Deep learning can be integrated into clinical decision support systems to aid healthcare professionals in making informed decisions. By analyzing patient data, medical literature, and treatment guidelines, deep learning models can provide recommendations on diagnosis, treatment plans, and medication options. These systems can enhance clinical decision-making, improve patient safety, and reduce medical errors.

7. Disease Outbreak Prediction

Deep learning can analyze vast amounts of data, including social media feeds, news articles, and sensor data, to detect early signs of disease outbreaks. By identifying patterns and correlations, deep learning algorithms can predict disease spread, helping public health authorities allocate resources, implement preventive measures, and mitigate the impact of epidemics.
