Introduction To Saliency Map In An Image With Tensorflow 2.X Api


Introduction to Tensorflow 2.x

Suppose we want to focus on a particular part of an image; what concept will help us? It is the saliency map, which focuses on specific pixels of an image while ignoring others.

Saliency Map

The saliency map of an image represents the most prominent and eye-catching pixels of the image. Often, the brighter pixels of a saliency map indicate the more salient regions: the brightness of a pixel is directly proportional to its saliency.

Suppose we want to give attention to a particular part of an image, for example to focus on a bird rather than other parts of the scene such as the sky or the nest. By computing the saliency map, we can achieve this, and it crucially assists in reducing the computational cost. A saliency map is usually a grayscale image but can be converted into a coloured format depending on our visual preference. Saliency maps are also termed "heat maps", since the hotness/brightness of a region has an impactful effect on identifying the class of the object. The saliency map aims to determine what is salient or observable at every location in the visual field and to guide the selection of attended regions based on the spatial pattern of saliency. It is employed in a variety of visual attention models; the "Itti and Koch" computational framework of visual attention is built on the notion of a saliency map.

How to Compute Saliency Map with Tensorflow?

The saliency map can be calculated by taking the derivative of the class probability P_k with respect to the input image X:

saliency_map = dP_k/dX

Hang on a minute! That seems quite familiar! Yes, it is the same backpropagation we use for training the model. We only need to take one more step: the gradient does not stop at the first layer of our network. Rather, we must propagate it back to the input image X.

As a result, saliency maps provide a suitable attribution for each input pixel with respect to a specific class prediction P_k. Pixels that are significant for a flower prediction should cluster around the flower's pixels. Otherwise, something really strange is going on with the trained model.

Different Types of the Saliency Map

1. Static Saliency: It focuses on every static pixel of the image to calculate the important area of interest for saliency map analysis.

2. Motion Saliency: It focuses on the dynamic features of video data. The saliency map in a video is computed by calculating the optical flow of a video. The moving entities/objects are considered salient objects.
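As a hedged aside, OpenCV ships ready-made static saliency detectors. The sketch below assumes the opencv-contrib-python package and a local image.jpg, neither of which is part of the original tutorial:

# A minimal sketch of static saliency with OpenCV (assumes opencv-contrib-python
# is installed and "image.jpg" exists; not part of the original tutorial)
import cv2

image = cv2.imread("image.jpg")

# Spectral-residual static saliency: highlights regions that stand out
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
success, saliency_map = saliency.computeSaliency(image)

# computeSaliency returns a float map in [0, 1]; scale it for display
saliency_map = (saliency_map * 255).astype("uint8")
cv2.imwrite("saliency.png", saliency_map)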

Code

We will perform a step-by-step investigation of the ResNet50 architecture, which has been pre-trained on ImageNet. You can also take another pretrained deep learning model or bring your own trained model. We will illustrate how to develop a basic saliency map using one of the most famous DL models in TensorFlow 2.x. We use a Wikimedia image as the test image throughout the tutorial.

We begin by creating a ResNet50 with ImageNet weights. With the simple helper functions below, we load the image from disk and prepare it for feeding to the ResNet50.

# Import necessary packages
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def input_img(path):
    image = tf.image.decode_png(tf.io.read_file(path))
    image = tf.expand_dims(image, axis=0)
    image = tf.cast(image, tf.float32)
    image = tf.image.resize(image, [224, 224])
    return image

def normalize_image(img):
    # Body truncated in the source; a standard min-max normalization of the
    # summed colour channels is assumed here
    grads_norm = img[:, :, 0] + img[:, :, 1] + img[:, :, 2]
    grads_norm = (grads_norm - tf.reduce_min(grads_norm)) / (
        tf.reduce_max(grads_norm) - tf.reduce_min(grads_norm))
    return grads_norm

def get_image():
    import urllib.request
    filename = 'image.jpg'
    # The download URL was truncated in the source; substitute the Wikimedia
    # test image URL of your choice
    url = '<wikimedia-image-url>'
    urllib.request.urlretrieve(url, filename)

fig1: Input_image

test_model = tf.keras.applications.resnet50.ResNet50()
#test_model.summary()
get_image()
img_path = "image.jpg"
img = input_img(img_path)  # renamed so the helper function is not shadowed
# Use the ResNet50 preprocessing (the source called the DenseNet one by mistake)
img = tf.keras.applications.resnet50.preprocess_input(img)
plt.imshow(normalize_image(img[0]), cmap="ocean")
result = test_model(img)
max_idx = tf.argmax(result, axis=1)
tf.keras.applications.imagenet_utils.decode_predictions(result.numpy())

with tf.GradientTape() as tape:
    tape.watch(img)
    result = test_model(img)
    max_score = result[0, max_idx[0]]

# The snippet was truncated in the source; the gradient of the top-class
# score with respect to the input pixels is the saliency map
grads = tape.gradient(max_score, img)
plt.imshow(normalize_image(grads[0]), cmap="ocean")

fig2: (1) Saliency_map, (2) input_image, (3) overlayed_image
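A hedged way to reproduce the overlayed view from fig2, reusing img and grads from the snippets above (the alpha value is a display choice, not from the source):

# A minimal sketch of the overlay in fig2: saliency drawn over the input image
# with matplotlib's alpha blending
plt.imshow(normalize_image(img[0]), cmap="gray")
plt.imshow(normalize_image(grads[0]), cmap="ocean", alpha=0.5)
plt.axis("off")
plt.show()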

Conclusion

In this blog, we have defined the saliency map from different angles, added a pictorial representation to build intuition for the term, and implemented it in Python using the TensorFlow API. The results look promising and are easy to interpret.

In this article, you have learned about the saliency map for an image with tensorflow.

A Python implementation on a sample image is also included.

The mathematical background of the saliency map is also covered in this article.

That’s it! Congratulations, you just computed a saliency map.


Tensorflow Functional Api: Building A Cnn


Introduction

In today's article, I will talk about developing a convolutional neural network using the TensorFlow Functional API. It will demonstrate the capabilities of the Functional API, which allows us to build hybrid model architectures beyond what a plain Sequential model can express.

Photo by Rita Morais from Unsplash

About: TensorFlow

TensorFlow is a popular library, one you hear about constantly in the deep learning and artificial intelligence community, among the numerous open-source packages and projects for deep learning.

TensorFlow, an open-source artificial intelligence library built around data flow graphs, is the most widely used deep learning library. It is used to build large-scale neural networks with many layers.

TensorFlow is used for deep learning and machine learning problems such as classification, perception, understanding, discovery, prediction, and creation.

So, when we tackle a classification problem, we often apply a convolutional neural network model. Still, most developers are familiar only with sequential models, where the layers follow each other one by one.

The Sequential API lets you design models layer by layer, which covers most common problems.

Its limitation is that it does not allow you to build models that share layers or have multiple inputs or outputs.

Because of this, we can use TensorFlow's Functional API, for example to build a multi-output model.

Functional API (tf.Keras)

The Functional API in tf.keras is an alternative way of building more flexible models, including considerably more complex ones.

For example, when implementing a slightly more complicated machine learning task, you may occasionally face a situation where you need additional models for the same data.

So we would need to produce two outputs. The simplest option would be to build two separate models on the corresponding data and make predictions with each.

This would be manageable, but what if, in a given scenario, we needed 50 outputs? It would be a pain to maintain all those separate models.

Alternatively, it is more fruitful to construct a single model with multiple outputs.

In the Functional API approach, models are defined by creating layers and connecting them directly to each other in pairs, then creating a Model that specifies which layers act as the input and the output.
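As a hedged sketch (the layer sizes and head names are illustrative assumptions, not from the article), a single functional model with two outputs can be wired like this:

# A minimal sketch of a two-output functional model (illustrative sizes/names)
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(32,))
shared = Dense(64, activation='relu')(inputs)                       # shared trunk
out_a = Dense(1, name='regression_head')(shared)                    # first output
out_b = Dense(3, activation='softmax', name='class_head')(shared)   # second output

model = Model(inputs=inputs, outputs=[out_a, out_b])
model.compile(optimizer='adam',
              loss={'regression_head': 'mse',
                    'class_head': 'categorical_crossentropy'})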

What is different in the Sequential API?

The Sequential API enables you to create models layer by layer for most common problems. It is limited in that it does not allow you to design models that share layers or have multiple inputs or outputs.

Let us understand how to create an object of sequential API model below:

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

In the Functional API, you can design models with a lot more versatility. You can easily define models where layers connect to more than just the preceding and succeeding layers.

You can connect a layer to several other layers. As a consequence, building heterogeneous networks such as siamese networks and residual networks becomes feasible, as sketched below.
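As a hedged illustration (the shapes here are illustrative, not from the article), a simple residual skip connection, a topology the Sequential API cannot express, looks like this:

# A minimal sketch of a residual (skip) connection
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Add, Activation
from tensorflow.keras.models import Model

inputs = Input(shape=(64,))
x = Dense(64, activation='relu')(inputs)
x = Dense(64)(x)
x = Add()([x, inputs])            # skip connection: add the input back in
outputs = Activation('relu')(x)

model = Model(inputs=inputs, outputs=outputs)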

Let's begin developing a CNN model using the Functional API.

In this post, we utilize the MNIST dataset to build the convolutional neural network for image classification. The MNIST database comprises 60,000 training images and 10,000 testing images collected from American Census Bureau employees and American high school students.

# import libraries
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical  # this import was missing from the source

# load data
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# convert sparse labels to categorical values
num_labels = len(np.unique(y_train))
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# preprocess the input images
image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
x_test = np.reshape(x_test, [-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

In the code above,

I distributed these two groups as train and test and distributed the labels and the inputs.

Independent variables (x_train and x_test) hold grayscale pixel values from 0 to 255, whereas dependent variables (y_train and y_test) carry labels from 0 to 9, describing which digit each image actually is.

It is good practice to normalize our data, as is generally required in deep learning models. We can accomplish this by dividing the pixel values by 255.

Next, we initialize parameters for the networks.

# parameters for the network
input_shape = (image_size, image_size, 1)
batch_size = 128
kernel_size = 3
filters = 64
dropout = 0.3

In the code above,

input_shape: describes the standalone Input layer that represents the input data. The Input layer accepts a shape argument, a tuple that describes the dimensions of the input data.

batch_size: a hyperparameter that determines the number of samples to run through before updating the internal model parameters.

kernel_size: the dimensions (height x width) of the filter mask. Convolutional neural networks (CNNs) are essentially a stack of layers that apply various filters to the input. Those filters are usually called kernels.

filters: the number of filters; each filter is a set of weights with which we convolve the input.

dropout: a technique where randomly selected neurons are ignored during training. Their contribution to the activation of downstream neurons is temporarily removed on the forward pass.

Let us now define the convolutional neural network:

# utilizing functional API to build cnn layers
inputs = Input(shape=input_shape)
y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(inputs)
y = MaxPooling2D()(y)
y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(y)
y = MaxPooling2D()(y)
y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(y)
# convert image to vector
y = Flatten()(y)
# dropout regularization
y = Dropout(dropout)(y)
outputs = Dense(num_labels, activation='softmax')(y)
# model building by supplying inputs/outputs
model = Model(inputs=inputs, outputs=outputs)

In the code above,

We specify a convolutional model for multi-class classification.

The model holds an input layer, three convolutional blocks with 64 filters each, and an output layer with 10 units, one per digit class.

Rectified linear activation functions are applied in all hidden layers, and a softmax activation function is adopted in the output layer so the model emits a probability per class.

You can observe that the layers in the model are connected pairwise. This is achieved by specifying where the input comes from when defining each new layer.

As with the Sequential API, the model is an object we can summarize, fit, evaluate, and use to make predictions.

TensorFlow provides a Model class that you can use to create a model from your developed layers. It only requires you to specify the input and output layers; the structure and graph of the network architecture follow from them.

Lastly, we train the model.

# the compile call was truncated in the source; categorical cross-entropy
# matches the one-hot labels created earlier
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(x_train, y_train,
          validation_data=(x_test, y_test),
          epochs=20,
          batch_size=batch_size)

# accuracy evaluation
score = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)
print("\nTest accuracy: %.1f%%" % (100.0 * score[1]))

Now we have successfully developed a convolutional neural network to recognize handwritten digits with TensorFlow's Functional API. We obtained an accuracy above 99%, and we can save the model (see the sketch below) and design a digit-classifier web application.
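As a small, hedged follow-up (the file name mnist_cnn.h5 is illustrative), saving and reloading the trained model for reuse in a web application looks like this:

# A minimal sketch of saving and reloading the trained model
model.save('mnist_cnn.h5')                                # persist architecture + weights

restored = tf.keras.models.load_model('mnist_cnn.h5')
digit = np.argmax(restored.predict(x_test[:1]), axis=1)   # predict one digit
print(digit)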


How To Unify Colors In An Image With Photoshop

Written by Steve Patterson.

In this tutorial, we’ll learn how to unify colors in an image with Photoshop. As photographers, artists and designers, color is one of the most powerful tools we have for conveying the message, mood or theme of an image. But like all good things, too much of it can be bad. In photography, it’s all too easy to capture too many colors in the scene, distracting the viewer’s eye and lessening the overall impact of the image.

Of course, we can always try to control or minimize colors before we take the shot. But that’s not always possible or practical. What we need, then, is a way to unify the colors in the image afterwards. By “unify the colors”, I mean taking colors that are very different from each other and making them more similar.

How do we do that? As we’ll learn in this tutorial, it’s actually very easy, especially with Photoshop. All we need to do is choose a single color to use for the overall theme of the image, and then mix, or blend, that color in with the photo’s original colors. Let’s see how it works!

I'm using Photoshop CC, but everything we'll be learning is fully compatible with Photoshop CS6 and earlier, so everyone can follow along.

Let’s get started!

Why Do We Need To Unify Colors? Too Many Colors

First, let’s look at a simplified version of the problem and the solution. When we’re done, we’ll take what we’ve learned and apply it to an actual photo. Here’s a quick design I made in Photoshop using six shapes, each filled with a different color. Along the top, we have red, yellow and green, and on the bottom, we have cyan, blue and magenta:

Six shapes, each adding a different color to the image.

If I was designing something for, say, a child’s birthday party, this might work. But in most cases, I think you would agree that there are too many different colors in this image. In terms of color theory, we would say that there are too many different hues, with “hue” being what most people think of as the actual color itself (as opposed to the saturation or lightness of the color).

So if there are too many colors, what can we do about it? Well, we could always convert the image to black and white which would certainly solve the problem. Or, we could unify the colors so they look more similar to each other. How do we do that? We do it by either choosing one of the existing colors in the image, or choosing a completely different color, and then mixing that color in with the others.

Choosing A Unifying Color

If we look in my Layers panel, we see the image sitting on the Background layer (I’ve flattened the layers here just to keep things simple):

The Layers panel showing the image on the Background layer.

I'll click the New Fill or Adjustment Layer icon at the bottom of the Layers panel, and then choose Solid Color from the top of the list that appears:

Choosing a Solid Color fill layer.

Photoshop will pop open its Color Picker where we can choose the color we want to use. The color you need may depend on the mood you’re trying to convey or on the theme of a larger, overall design. For this example, I’ll choose a shade of orange:

Choosing a color from the Color Picker.

Photoshop fills the document with the color.

The reason that the color is blocking the image is because, if we look in the Layers panel, we see that Photoshop has placed my Solid Color fill layer, named “Color Fill 1”, above the image on the Background layer. Any layer that sits above another layer in the Layers panel appears in front of that layer in the document:

The Layers panel showing the fill layer above the Background layer.

Related: Understanding Layers in Photoshop

Mixing The Colors – The “Color” Blend Mode

To blend the fill layer's color with the image, change the fill layer's blend mode from Normal to Color in the upper left of the Layers panel:

Changing the blend mode of the fill layer to Color.

By changing the blend mode to Color, we allow our Solid Color fill layer to affect only the colors in the image below it. It no longer has any effect on the tonal values (the brightness) of the image.

If we look at my document after changing the blend mode to Color, we see that my shapes are now once again visible. But, rather than appearing with their original colors, they now appear as different shades of the same color (the color I chose in the Color Picker):

The shapes re-appear, but now they’re all the same hue.

Mixing The Colors – Layer Opacity

We’re on the right track, but since our goal here is to make the colors more similar, not make them all the same hue, I still need a way to mix the color from the fill layer in with the original colors of the shapes. To do that, all I need to do is adjust the opacity of the fill layer. You’ll find the Opacity option in the upper right of the Layers panel, directly across from the Blend Mode option.

Opacity controls the transparency of the layer. By default, the opacity value is set to 100%, which means that the layer is 100% visible. Lowering the opacity value makes the layer more transparent, allowing the layer(s) below it to partially show through. If we lower the opacity of our Solid Color fill layer, we’ll allow the colors from the original image to show through the fill layer’s color, effectively mixing the colors from both layers together!
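The text above describes a simple linear mix. As a hedged illustration (a sketch of the arithmetic only; Photoshop's Color blend mode additionally preserves the original luminosity, which this sketch ignores), the per-pixel result of lowering the fill layer's opacity works out like this:

# A minimal sketch of opacity mixing: result = opacity*fill + (1-opacity)*original
import numpy as np

original = np.array([200, 40, 60], dtype=float)   # an original RGB pixel
fill = np.array([255, 140, 0], dtype=float)       # the fill layer's color
opacity = 0.25

mixed = opacity * fill + (1 - opacity) * original
print(mixed)  # 25% of the fill color blended with 75% of the original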

To show you what I mean, I’m actually going to start by lowering my opacity value all the way down to 0%:

Lowering the opacity of the fill layer to 0%.

At 0% opacity, the fill layer becomes 100% transparent, and we’re back to seeing the shapes in their original colors, completely unaffected by the fill layer:

The result with the Solid Color fill layer’s opacity set to 0%.

Watch what happens, though, as I start to increase the fill layer’s opacity. I’ll start by increasing it to 25%:

Increasing the fill layer’s opacity to 25%.

By increasing the opacity to 25%, I’m telling Photoshop to mix 25% of the fill layer’s color with 75% of the original colors, and here’s the result. Since each shape now has some of the orange from the fill layer mixed into it, the orange is unifying their colors so they no longer look quite so different. The effect is subtle at the moment, but even so, we can already see that they’re becoming more similar:

The result with the fill layer’s opacity set to 25%.

If I increase the opacity of the fill layer to 50%:

Increasing the fill layer’s opacity to 50%.

I’m now mixing 50% of the fill layer’s color in with 50% of the original colors, and now the shapes are looking even more similar:

The result with the fill layer’s opacity set to 50%.

And, if I increase the fill layer’s opacity to 75%:

Increasing the fill layer’s opacity to 75%.

Photoshop is now mixing 75% of the fill layer’s color with only 25% of the original colors, creating a very strong color theme:

The result with the fill layer’s opacity set to 75%.

Changing The Unifying Color

To change the unifying color, I'll double-click the fill layer's color swatch in the Layers panel. Photoshop re-opens the Color Picker, allowing me to choose a different color. This time, I'll choose a pinkish purple:

Choosing a new color from the Color Picker.

The result after changing the fill color.

At the moment, I still have the opacity of my fill layer set to 75%. If the effect is too strong, all I need to do is lower the opacity. I’ll lower it down to 50%:

Lowering the fill layer’s opacity to 50%.

And now, the shapes are still being unified by the new color, but the effect is more subtle:

The result after lowering the fill layer’s opacity.

How To Unify Colors In An Image

And that's really all there is to it! So now that we've looked at the basic theory behind unifying colors with Photoshop, let's take what we've learned and apply it to an actual photo. You can use any photo you like. I'll use this one since it contains lots of different colors (colorful umbrellas photo from Adobe Stock):

The original image. Photo credit: Adobe Stock.

Step 1: Add A Solid Color Fill Layer

Click the New Fill or Adjustment Layer icon at the bottom of the Layers panel, then choose Solid Color from the top of the list:

Choosing a Solid Color fill layer.

Step 2: Choose Your Color

Choose your color from the Color Picker.

Step 3: Change The Fill Layer’s Blend Mode To “Color”

Next, back in the Layers panel, change the blend mode of the Solid Color fill layer from Normal to Color:

Changing the blend mode of the fill layer to Color.

Your image will re-appear, but at the moment, it’s completely colorized by the fill layer:

The image after changing the blend mode to Color.

Step 4: Lower The Fill Layer’s Opacity

To mix the fill layer’s color in with the original colors of the image, simply lower the fill layer’s opacity. The exact value you need will depend on your image, so keep an eye on it as you adjust the opacity until you’re happy with the results. For this image, I’ll lower the opacity to 25%:

Lower the opacity to blend the colors together.

This mixes 25% of the fill layer with 75% of the original image, unifying the colors nicely:

The result after lowering the fill layer’s opacity.

Before And After

To make the difference with my image easier to see, here’s a split-view comparison showing the original colors on the left and the unified colors on the right:

The original (left) and unified (right) colors.

Sampling A Unifying Color From The Image

Finally, let’s look at how to choose a unifying color directly from the image itself. So far, we’ve been choosing colors from the Color Picker. But let’s say I want to choose a color from one of the umbrellas. To do that, the first thing I’ll do is lower the opacity of my fill layer all the way down to 0%. This will make the fill layer completely transparent for a moment so I’m seeing the original colors in the image:

To choose a color from the image, first lower the fill layer’s opacity to 0%.

With the original colors visible, I'll double-click the fill layer's swatch to re-open the Color Picker, then move my cursor over the image, where it turns into an eyedropper, and click on the color I want to use. The sampled color appears in the Color Picker.

I'll click OK to close out of the Color Picker, then increase the fill layer's opacity, this time to 20%:

And here’s the result. Just as we saw earlier, I was able to instantly change the color theme of the image simply by changing the color of my fill layer, and then adjusting the opacity as needed:

The final result.

And there we have it! That's how to easily unify colors in an image using nothing more than a Solid Color fill layer, the Color blend mode, and the layer opacity option in Photoshop!

Linear Regression Tutorial With Tensorflow

What is Linear Regression?

Linear Regression is an approach in statistics for modelling relationships between two variables. This modelling is done between a scalar response and one or more explanatory variables. The relationship with one explanatory variable is called simple linear regression and for more than one explanatory variables, it is called multiple linear regression.

TensorFlow provides tools to have full control of the computations. This is done with the low-level API. On top of that, TensorFlow is equipped with a vast array of APIs to perform many machine learning algorithms. This is the high-level API. TensorFlow calls them estimators.

Low-level API: Build the architecture and optimization of the model from scratch. It is complicated for a beginner.

High-level API: Define the algorithm. It is far more user-friendly. TensorFlow provides a toolbox called estimator to construct, train, evaluate and make a prediction.

In this tutorial, you will use the estimators only. The computations are faster and are easier to implement. The first part of the tutorial explains how to use the gradient descent optimizer to train a Linear regression in TensorFlow. In a second part, you will use the Boston dataset to predict the price of a house using TensorFlow estimator.

Download Boston DataSet

In this TensorFlow Regression tutorial, you will learn:

How to train a linear regression model

Before we begin to train the model, let’s have a look at what is a linear regression.

Imagine you have two variables, x and y, and your task is to predict the value of y knowing the value of x. If you plot the data, you can see a positive relationship between your independent variable, x, and your dependent variable, y.

You may observe that if x = 1, y will roughly be equal to 6, and if x = 2, y will be around 8.5.

This is not a very accurate method and prone to error, especially with a dataset with hundreds of thousands of points.

A linear regression is evaluated with an equation. The variable y is explained by one or many covariates. In your example, there is only one dependent variable. If you have to write this equation, it will be:

y = α + βx + ε

With:

α is the bias, i.e. the value of y when x = 0

β is the weight associated with x

ε is the residual, or the error of the model. It includes what the model cannot learn from the data

Imagine you fit the model and you find the following solution:

α = 3.8

β = 2.78

You can substitute those numbers in the equation and it becomes:

y = 3.8 + 2.78x

You have now a better way to find the values for y. That is, you can replace x with any value you want to predict y. In the image below, we have replace x in the equation with all the values in the dataset and plot the result.

The red line represents the fitted values, that is, the value of y for each value of x. You don't need to read off the value of x to predict y; for each x there is a predicted value ŷ that belongs to the red line. You can also predict for values of x higher than 2!

If you want to extend the linear regression to more covariates, you can, by adding more variables to the model. The difference between traditional analysis and linear regression is that linear regression looks at how y reacts to each variable x taken independently.

Let's see an example. Imagine you want to predict the sales of an ice cream shop. The dataset contains different information such as the weather (i.e. rainy, sunny, cloudy) and customer information (i.e. salary, gender, marital status).

Traditional analysis will try to predict the sales by, let's say, computing the average for each variable and trying to estimate the sales for different scenarios. It will lead to poor predictions and restrict the analysis to the chosen scenarios.

If you use linear regression, you can write this equation:

sales = α + β₁x₁ + β₂x₂ + ... + βₙxₙ + ε

The algorithm will find the best solution for the weights; it means it will try to minimize the cost (the difference between the fitted line and the data points).

How the algorithm works

The algorithm will choose a random number for each α and β and replace the values of x to get the predicted values of y. If the dataset has 100 observations, the algorithm computes 100 predicted values.

We can compute the error, noted ε, of the model, which is the difference between the predicted value and the real value. A positive error means the model underestimates the prediction of y, and a negative error means the model overestimates the prediction of y.

Your goal is to minimize the square of the error. The algorithm computes the mean of the squared errors. This step is called minimization of the error. For linear regression it is the Mean Squared Error, also called MSE. Mathematically, it is:

MSE = (1/m) · Σᵢ (θᵀxᵢ − yᵢ)²

Where:

θ is the vector of weights, so θᵀx refers to the predicted value

y is the real value

m is the number of observations

Note that θᵀ means it uses the transpose of the matrices, and (1/m)·Σ is the mathematical notation of the mean.

The goal is to find the best θ that minimizes the MSE.
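As a quick sanity check of the MSE formula above, here is a small NumPy sketch with toy numbers (not the Boston data):

# A minimal sketch of the MSE formula on toy data
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # first column is the bias term
y = np.array([6.0, 8.5, 11.5])
theta = np.array([3.8, 2.78])                        # [alpha, beta]

predictions = X @ theta                              # theta^T x for every row
mse = np.mean((predictions - y) ** 2)
print(mse)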

If the average error is large, it means the model performs poorly and the weights are not chosen properly. To correct the weights, you need to use an optimizer. The traditional optimizer is called Gradient Descent.

The gradient descent takes the derivative and decreases or increases the weight. If the derivative is positive, the weight is decreased. If the derivative is negative, the weight increases. The model will update the weights and recompute the error. This process is repeated until the error does not change anymore. Each process is called an iteration. Besides, the gradients are multiplied by a learning rate. It indicates the speed of the learning.

If the learning rate is too small, it will take very long time for the algorithm to converge (i.e requires lots of iterations). If the learning rate is too high, the algorithm might never converge.

You can see from the picture above that the model repeats the process about 20 times before finding a stable value for the weights, therefore reaching the lowest error.

Note that, the error is not equal to zero but stabilizes around 5. It means, the model makes a typical error of 5. If you want to reduce the error, you need to add more information to the model such as more variables or use different estimators.

You remember the first equation, y = α + βx + ε. The final weights are α = 3.8 and β = 2.78; gradient descent optimizes the loss function to find these weights, as sketched below.
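Below is a minimal, hedged sketch of that gradient descent loop on synthetic data generated around those weights (the learning rate and iteration count are illustrative choices, not from the tutorial):

# A minimal sketch of gradient descent for simple linear regression on toy data
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 3, size=100)
y = 3.8 + 2.78 * x + rng.normal(0, 0.5, size=100)   # data built around the article's weights

alpha, beta = 0.0, 0.0          # starting weights
learning_rate = 0.05

for _ in range(2000):           # each pass is one iteration
    predictions = alpha + beta * x
    error = predictions - y
    # gradients of the MSE with respect to alpha and beta
    grad_alpha = 2 * error.mean()
    grad_beta = 2 * (error * x).mean()
    alpha -= learning_rate * grad_alpha
    beta -= learning_rate * grad_beta

print(alpha, beta)              # converges near 3.8 and 2.78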

How to train a Linear Regression with TensorFlow

Now that you have a better understanding of what is happening behind the hood, you are ready to use the estimator API provided by TensorFlow to train your first linear regression using TensorFlow.

You will use the Boston Dataset, which includes the following variables

crim per capita crime rate by town

zn proportion of residential land zoned for lots over 25,000 sq.ft.

indus proportion of non-retail business acres per town.

nox nitric oxides concentration

rm average number of rooms per dwelling

age proportion of owner-occupied units built before 1940

dis weighted distances to five Boston employment centers

tax full-value property-tax rate per $10,000

ptratio pupil-teacher ratio by town

medv Median value of owner-occupied homes in thousand dollars

You will create three different datasets:

Training: train the model and obtain the weights; shape (400, 10)

Evaluation: evaluate the performance of the model on unseen data; shape (100, 10)

Predict: use the model to predict house values on new data; shape (6, 10)

The objective is to use the features of the dataset to predict the value of the house.

During the second part of the tutorial, you will learn how to use TensorFlow with three different ways to import the data:

With Pandas

With Numpy

Only TF

Note that, all options provide the same results.

You will learn how to use the high-level API to build, train and evaluate a TensorFlow linear regression model. If you were using the low-level API, you would have to define by hand the:

Loss function

Optimize: Gradient descent

Matrices multiplication

Graph and tensor

This is tedious and more complicated for beginners.

Pandas

You need to import the necessary libraries to train the model.

import pandas as pd
from sklearn import datasets
import tensorflow as tf
import itertools

Step 1) Import the data with pandas.

You define the column names and store it in COLUMNS. You can use pd.read_csv() to import the data.

COLUMNS = ["crim", "zn", "indus", "nox", "rm", "age", "dis", "tax", "ptratio", "medv"]

training_set = pd.read_csv("E:/boston_train.csv", skipinitialspace=True, skiprows=1, names=COLUMNS)

test_set = pd.read_csv("E:/boston_test.csv", skipinitialspace=True, skiprows=1, names=COLUMNS)

prediction_set = pd.read_csv("E:/boston_predict.csv", skipinitialspace=True, skiprows=1, names=COLUMNS)

You can print the shape of the data.

print(training_set.shape, test_set.shape, prediction_set.shape)

Output (400, 10) (100, 10) (6, 10)

Note that the label, i.e. your y, is included in the dataset. So you need to define two other lists: one containing only the features and one with the name of the label only. These two lists tell your estimator which columns in the dataset are the features and which column is the label.

It is done with the code below.

FEATURES = ["crim", "zn", "indus", "nox", "rm", "age", "dis", "tax", "ptratio"] LABEL = "medv"

Step 2) Convert the data

You need to convert the numeric variables in the proper format. Tensorflow provides a method to convert continuous variable: tf.feature_column.numeric_column().

In the previous step, you defined a list of features you want to include in the model. Now you can use this list to convert them into numeric columns. If you want to exclude features from your model, feel free to drop one or more variables from the list FEATURES before you construct feature_cols.

Note that you will use a Python list comprehension with the list FEATURES to create a new list named feature_cols. It helps you avoid writing tf.feature_column.numeric_column() nine times. A list comprehension is a faster and cleaner way to create new lists.

feature_cols = [tf.feature_column.numeric_column(k) for k in FEATURES]

Step 3) Define the estimator

In this step, you need to define the estimator. TensorFlow currently provides 6 pre-built estimators, including 3 for classification tasks and 3 for regression tasks:

Regressor

DNNRegressor

LinearRegressor

DNNLinearCombinedRegressor

Classifier

DNNClassifier

LinearClassifier

DNNLinearCombinedClassifier

In this tutorial, you will use the LinearRegressor. To access this estimator, you need to use tf.estimator.LinearRegressor.

The function needs two arguments:

feature_columns: Contains the variables to include in the model

model_dir: path to store the graph, save the model parameters, etc

TensorFlow will automatically create a directory named train in your working directory. You need to use this path to access TensorBoard, as shown in the TensorFlow regression example below.

estimator = tf.estimator.LinearRegressor(
    feature_columns=feature_cols,
    model_dir="train")

Output INFO:tensorflow:Using default config.

The tricky part with TensorFlow is the way it feeds the model. TensorFlow is designed to work with parallel computing and very large datasets. Due to the limitation of machine resources, it is impossible to feed the model with all the data at once; you need to feed a batch of data each time. Note that we are talking about huge datasets with millions or more records. If you don't use batches, you will end up with a memory error.

For instance, if your data contains 100 observations and you define a batch size of 10, the model will see 10 observations per iteration and needs 10 iterations to see the full dataset (10*10 = 100).

When the model has seen all the data, it finishes one epoch. An epoch defines how many times you want the model to see the data. It is better to set this to None and control the number of training iterations instead.

A second information to add is if you want to shuffle the data before each iteration. During the training, it is important to shuffle the data so that the model does not learn specific pattern of the dataset. If the model learns the details of the underlying pattern of the data, it will have difficulties to generalize the prediction for unseen data. This is called overfitting. The model performs well on the training data but cannot predict correctly for unseen data.

TensorFlow makes these two steps easy to do. When the data goes into the pipeline, it knows how many observations it needs (the batch) and whether it has to shuffle the data, as illustrated below.
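As a hedged illustration of these two steps in isolation, here is a tiny tf.data example (run in TensorFlow 2.x eager mode; the tutorial itself uses the estimator input functions described next):

# A minimal sketch of batch + shuffle with tf.data (TF 2.x eager mode)
import tensorflow as tf

dataset = tf.data.Dataset.range(10)          # ten fake observations: 0..9
dataset = dataset.shuffle(buffer_size=10)    # shuffle before each pass
dataset = dataset.batch(3)                   # 3 observations per iteration

for batch in dataset:
    print(batch.numpy())                     # e.g. [7 2 9], [0 5 1], ...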

To instruct Tensorflow how to feed the model, you can use pandas_input_fn. This object needs 5 parameters:

x: feature data

y: label data

batch_size: the batch size. By default, 128

num_epochs: the number of epochs. By default, 1

shuffle: whether or not to shuffle the data. By default, None

You need to feed the model many times, so you define a function to repeat this process: call this function get_input_fn.

def get_input_fn(data_set, num_epochs=None, n_batch=128, shuffle=True):
    return tf.estimator.inputs.pandas_input_fn(
        x=pd.DataFrame({k: data_set[k].values for k in FEATURES}),
        y=pd.Series(data_set[LABEL].values),
        batch_size=n_batch,
        num_epochs=num_epochs,
        shuffle=shuffle)

The usual method to evaluate the performance of a model is to:

Train the model

Evaluate the model on a different dataset

Make prediction

The TensorFlow estimator provides three different functions to carry out these three steps easily.

Step 4): Train the model

You can use the estimator's train method to train the model. It needs an input_fn and a number of steps. You can use the function you created above to feed the model. Then, you instruct the model to iterate 1000 times. Note that you don't specify the number of epochs; you let the model iterate 1000 times. If you set the number of epochs to 1, then the model will iterate 4 times: there are 400 records in the training set and the batch size is 128, so one epoch takes

128 rows

128 rows

128 rows

16 rows

Therefore, it is easier to set the number of epochs to None and define the number of iterations, as shown in the TensorFlow regression example below.

estimator.train(input_fn=get_input_fn(training_set,
                                      num_epochs=None,
                                      n_batch=128,
                                      shuffle=False),
                steps=1000)

Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 1 into train/model.ckpt. INFO:tensorflow:loss = 83729.64, step = 1 INFO:tensorflow:global_step/sec: 238.616 INFO:tensorflow:loss = 13909.657, step = 101 (0.420 sec) INFO:tensorflow:global_step/sec: 314.293 INFO:tensorflow:loss = 12881.449, step = 201 (0.320 sec) INFO:tensorflow:global_step/sec: 303.863 INFO:tensorflow:loss = 12391.541, step = 301 (0.327 sec) INFO:tensorflow:global_step/sec: 308.782 INFO:tensorflow:loss = 12050.5625, step = 401 (0.326 sec) INFO:tensorflow:global_step/sec: 244.969 INFO:tensorflow:loss = 11766.134, step = 501 (0.407 sec) INFO:tensorflow:global_step/sec: 155.966 INFO:tensorflow:loss = 11509.922, step = 601 (0.641 sec) INFO:tensorflow:global_step/sec: 263.256 INFO:tensorflow:loss = 11272.889, step = 701 (0.379 sec) INFO:tensorflow:global_step/sec: 254.112 INFO:tensorflow:loss = 11051.9795, step = 801 (0.396 sec) INFO:tensorflow:global_step/sec: 292.405 INFO:tensorflow:loss = 10845.855, step = 901 (0.341 sec) INFO:tensorflow:Saving checkpoints for 1000 into train/model.ckpt. INFO:tensorflow:Loss for final step: 5925.9873.

You can check TensorBoard with the following commands:

activate hello-tf

# For MacOS
tensorboard --logdir=./train

# For Windows
tensorboard --logdir=train

Step 5) Evaluate your model

You can evaluate the fit of your model on the test set with the code below:

ev = estimator.evaluate(
    input_fn=get_input_fn(test_set,
                          num_epochs=1,
                          n_batch=128,
                          shuffle=False))

Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2023-05-13-01:43:13 INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Finished evaluation at 2023-05-13-01:43:13 INFO:tensorflow:Saving dict for global step 1000: average_loss = 32.15896, global_step = 1000, loss = 3215.896

You can print the loss with the code below:

loss_score = ev["loss"]
print("Loss: {0:f}".format(loss_score))

Output Loss: 3215.895996

The model has a loss of 3215. You can check the summary statistic to get an idea of how big the error is.

training_set['medv'].describe()

Output
count    400.000000
mean      22.625500
std        9.572593
min        5.000000
25%       16.600000
50%       21.400000
75%       25.025000
max       50.000000
Name: medv, dtype: float64

From the summary statistics above, you know that the average price for a house is 22 thousand dollars, with a minimum price of 5 thousand and a maximum of 50 thousand. The loss of 3215 over 100 test rows corresponds to an average loss (MSE) of about 32 per house, so the model makes a typical error of roughly √32 ≈ 5.7 thousand dollars.
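The arithmetic behind that estimate, as a quick sketch:

# Typical prediction error from the evaluation output above
import math

average_loss = 3215.896 / 100        # total loss over the 100 test rows -> MSE per row
print(average_loss)                  # 32.15896
print(math.sqrt(average_loss))       # ~5.67, i.e. about 5.7 thousand dollars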

Step 6) Make the prediction

Finally, you can use the estimator's predict method to estimate the value of 6 Boston houses.

y = estimator.predict(
    input_fn=get_input_fn(prediction_set,
                          num_epochs=1,
                          n_batch=128,
                          shuffle=False))

To print the estimated values of ŷ, you can use this code:

predictions = list(p["predictions"] for p in itertools.islice(y, 6))
print("Predictions: {}".format(str(predictions)))

Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. Predictions: [array([32.297546], dtype=float32), array([18.96125], dtype=float32), array([27.270979], dtype=float32), array([29.299236], dtype=float32), array([16.436684], dtype=float32), array([21.460876], dtype=float32)]

The model forecasts the following values:

House Prediction

1 32.29

2 18.96

3 27.27

4 29.29

5 16.43

6 21.46

Note that we don't know the true value of y for these houses. In the deep learning tutorial, you will try to beat the linear model.

Numpy Solution

This section explains how to train the model using a numpy input function to feed the data. The method is the same except that you will use the numpy_input_fn estimator input.

training_set_n = pd.read_csv("E:/boston_train.csv").values

test_set_n = pd.read_csv("E:/boston_test.csv").values

prediction_set_n = pd.read_csv("E:/boston_predict.csv").values

Step 1) Import the data

First of all, you need to separate the feature variables from the label. You need to do this for the training data and the evaluation data. It is faster to define a function to split the data.

def prepare_data(df):
    X_train = df[:, :-3]
    y_train = df[:, -3]
    return X_train, y_train

You can use the function to split the label from the features of the train/evaluate dataset

X_train, y_train = prepare_data(training_set_n)
X_test, y_test = prepare_data(test_set_n)

You need to exclude the last column of the prediction dataset because it contains only NaN

x_predict = prediction_set_n[:, :-2]

Confirm the shape of the arrays. Note that the label should have no column dimension, i.e. shape (400,).

print(X_train.shape, y_train.shape, x_predict.shape)

Output (400, 9) (400,) (6, 9)

You can construct the feature columns as follows:

feature_columns = [ tf.feature_column.numeric_column('x', shape=X_train.shape[1:])]

The estimator is defined as before: you pass the feature columns and the directory where to save the graph.

estimator = tf.estimator.LinearRegressor(
    feature_columns=feature_columns,
    model_dir="train1")

Output INFO:tensorflow:Using default config.

You can use the numpy input function to feed the data to the model and then train the model. Note that we define the input function beforehand to ease readability.

# Train the estimator
train_input = tf.estimator.inputs.numpy_input_fn(
    x={"x": X_train},
    y=y_train,
    batch_size=128,
    shuffle=False,
    num_epochs=None)
estimator.train(input_fn=train_input, steps=5000)

Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 1 into train1/model.ckpt. INFO:tensorflow:loss = 83729.64, step = 1 INFO:tensorflow:global_step/sec: 490.057 INFO:tensorflow:loss = 13909.656, step = 101 (0.206 sec) INFO:tensorflow:global_step/sec: 788.986 INFO:tensorflow:loss = 12881.45, step = 201 (0.126 sec) INFO:tensorflow:global_step/sec: 736.339 INFO:tensorflow:loss = 12391.541, step = 301 (0.136 sec) INFO:tensorflow:global_step/sec: 383.305 INFO:tensorflow:loss = 12050.561, step = 401 (0.260 sec) INFO:tensorflow:global_step/sec: 859.832 INFO:tensorflow:loss = 11766.133, step = 501 (0.117 sec) INFO:tensorflow:global_step/sec: 804.394 INFO:tensorflow:loss = 11509.918, step = 601 (0.125 sec) INFO:tensorflow:global_step/sec: 753.059 INFO:tensorflow:loss = 11272.891, step = 701 (0.134 sec) INFO:tensorflow:global_step/sec: 402.165 INFO:tensorflow:loss = 11051.979, step = 801 (0.248 sec) INFO:tensorflow:global_step/sec: 344.022 INFO:tensorflow:loss = 10845.854, step = 901 (0.288 sec) INFO:tensorflow:Saving checkpoints for 1000 into train1/model.ckpt. INFO:tensorflow:Loss for final step: 5925.985.

You replicate the same step with a different input function to evaluate your model.

eval_input = tf.estimator.inputs.numpy_input_fn(
    x={"x": X_test},
    y=y_test,
    shuffle=False,
    batch_size=128,
    num_epochs=1)
estimator.evaluate(eval_input, steps=None)

Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2023-05-13-01:44:00 INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train1/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Finished evaluation at 2023-05-13-01:44:00 INFO:tensorflow:Saving dict for global step 1000: average_loss = 32.158947, global_step = 1000, loss = 3215.8945 Out[24]: {'average_loss': 32.158947, 'global_step': 1000, 'loss': 3215.8945}

Finally, you can compute the prediction. It should be similar to the pandas results.

test_input = tf.estimator.inputs.numpy_input_fn(
    x={"x": x_predict},
    batch_size=128,
    num_epochs=1,
    shuffle=False)
y = estimator.predict(test_input)
predictions = list(p["predictions"] for p in itertools.islice(y, 6))
print("Predictions: {}".format(str(predictions)))

Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train1/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. Predictions: [array([32.297546], dtype=float32), array([18.961248], dtype=float32), array([27.270979], dtype=float32), array([29.299242], dtype=float32), array([16.43668], dtype=float32), array([21.460878], dtype=float32)]

TensorFlow solution

The last section is dedicated to a pure TensorFlow solution. This method is slightly more complicated than the other ones.

Note that if you use Jupyter notebook, you need to Restart and clean the kernel to run this session.

TensorFlow has built a great tool to pass the data into the pipeline. In this section, you will build the input_fn function by yourself.

Step 1) Define the path and the format of the data

First of all, you declare two variables with the path of the csv file. Note that, you have two files, one for the training set and one for the testing set.

import tensorflow as tf

df_train = "E:/boston_train.csv"
df_eval = "E:/boston_test.csv"

Then, you need to define the columns you want to use from the csv file. We will use all. After that, you need to declare the type of variable it is.

Float variables are defined by [0.0].

COLUMNS = ["crim", "zn", "indus", "nox", "rm", "age", "dis", "tax", "ptratio", "medv"]RECORDS_ALL = [[0.0], [0.0], [0.0], [0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0]]

Step 2) Define the input_fn function

The function can be broken into three parts:

Import the data

Create the iterator

Consume the data

Below is the overall code to define the function. The code will be explained after.

def input_fn(data_file, batch_size, num_epoch=None):
    # Step 1: parse each csv line into features and label
    def parse_csv(value):
        columns = tf.decode_csv(value, record_defaults=RECORDS_ALL)
        features = dict(zip(COLUMNS, columns))
        labels = features.pop('medv')
        return features, labels

    # Step 2: extract lines from input files using the Dataset API
    dataset = (tf.data.TextLineDataset(data_file)  # Read text file
               .skip(1)                            # Skip header row
               .map(parse_csv))
    dataset = dataset.repeat(num_epoch)
    dataset = dataset.batch(batch_size)
    # Step 3: create the iterator and consume the data
    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return features, labels

Import the data

This method calls a function that you will create in order to instruct how to transform the data. In a nutshell, you need to pass the data into a TextLineDataset object, exclude the header, and apply a transformation with a map function. Code explanation:

tf.data.TextLineDataset(data_file): this line reads the csv file

.skip(1): skips the header

.map(parse_csv): parses the records into tensors

You need to define a function to instruct the map object. You can call this function parse_csv.

This function parses the csv file with the method tf.decode_csv and declares the features and the label. The features can be declared as a dictionary or a tuple. You use the dictionary method because it is more convenient. Code explanation:

tf.decode_csv(value, record_defaults=RECORDS_ALL): the method decode_csv uses the output of the TextLineDataset to read the csv file. record_defaults instructs TensorFlow about the column types.

dict(zip(COLUMNS, columns)): populates the dictionary with all the columns extracted during this data processing

features.pop('medv'): excludes the target variable from the feature dictionary and uses it as the label

The Dataset needs further elements to iteratively feed the tensors. Indeed, you need to add the repeat method to allow the dataset to continue feeding the model indefinitely. If you don't add the method, the model will iterate only one time and then throw an error because no more data is being fed into the pipeline.

After that, you can control the batch size with the batch method. It means you tell the dataset how many rows you want to pass into the pipeline for each iteration. If you set a large batch size, each step of the model will be slow.

Step 3) Create the iterator

Now you are ready for the second step: create an iterator to return the elements in the dataset.

The simplest way of creating an iterator is with the method make_one_shot_iterator.

After that, you can create the features and labels from the iterator.

Step 4) Consume the data

You can check what happens with the input_fn function. You need to call the function in a session to consume the data. You can try with a batch size equal to 1.

Note that, it prints the features in a dictionary and the label as an array.

It will show the first line of the csv file. You can try to run this code many times with different batch size.

next_batch = input_fn(df_train, batch_size=1, num_epoch=None)
with tf.Session() as sess:
    first_batch = sess.run(next_batch)
    print(first_batch)

Output ({'crim': array([2.3004], dtype=float32), 'zn': array([0.], dtype=float32), 'indus': array([19.58], dtype=float32), 'nox': array([0.605], dtype=float32), 'rm': array([6.319], dtype=float32), 'age': array([96.1], dtype=float32), 'dis': array([2.1], dtype=float32), 'tax': array([403.], dtype=float32), 'ptratio': array([14.7], dtype=float32)}, array([23.8], dtype=float32))

Step 5) Define the feature columns

You need to define the numeric columns as follows:

X1 = tf.feature_column.numeric_column('crim')
X2 = tf.feature_column.numeric_column('zn')
X3 = tf.feature_column.numeric_column('indus')
X4 = tf.feature_column.numeric_column('nox')
X5 = tf.feature_column.numeric_column('rm')
X6 = tf.feature_column.numeric_column('age')
X7 = tf.feature_column.numeric_column('dis')
X8 = tf.feature_column.numeric_column('tax')
X9 = tf.feature_column.numeric_column('ptratio')

Note that you need to combine all the variables into a list:

base_columns = [X1, X2, X3,X4, X5, X6,X7, X8, X9]

Step 6) Build the model

You can train the model with the estimator LinearRegressor.

model = tf.estimator.LinearRegressor(feature_columns=base_columns, model_dir='train3')

You need to use a lambda function to be able to pass arguments to the function input_fn. If you don't use a lambda function, you cannot train the model.

# Train the estimator
model.train(steps=1000,
            input_fn=lambda: input_fn(df_train, batch_size=128, num_epoch=None))

Output

INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 1 into train3/model.ckpt. INFO:tensorflow:loss = 83729.64, step = 1 INFO:tensorflow:global_step/sec: 72.5646 INFO:tensorflow:loss = 13909.657, step = 101 (1.380 sec) INFO:tensorflow:global_step/sec: 101.355 INFO:tensorflow:loss = 12881.449, step = 201 (0.986 sec) INFO:tensorflow:global_step/sec: 109.293 INFO:tensorflow:loss = 12391.541, step = 301 (0.915 sec) INFO:tensorflow:global_step/sec: 102.235 INFO:tensorflow:loss = 12050.5625, step = 401 (0.978 sec) INFO:tensorflow:global_step/sec: 104.656 INFO:tensorflow:loss = 11766.134, step = 501 (0.956 sec) INFO:tensorflow:global_step/sec: 106.697 INFO:tensorflow:loss = 11509.922, step = 601 (0.938 sec) INFO:tensorflow:global_step/sec: 118.454 INFO:tensorflow:loss = 11272.889, step = 701 (0.844 sec) INFO:tensorflow:global_step/sec: 114.947 INFO:tensorflow:loss = 11051.9795, step = 801 (0.870 sec) INFO:tensorflow:global_step/sec: 111.484 INFO:tensorflow:loss = 10845.855, step = 901 (0.897 sec) INFO:tensorflow:Saving checkpoints for 1000 into train3/model.ckpt. INFO:tensorflow:Loss for final step: 5925.9873. Out[8]:

You can evaluate the fit of your model on the test set with the code below:

results = model.evaluate(steps=None,
                         input_fn=lambda: input_fn(df_eval, batch_size=128, num_epoch=1))
for key in results:
    print("   {}, was: {}".format(key, results[key]))

Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2023-05-13-02:06:02 INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train3/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Finished evaluation at 2023-05-13-02:06:02 INFO:tensorflow:Saving dict for global step 1000: average_loss = 32.15896, global_step = 1000, loss = 3215.896 average_loss, was: 32.158958435058594 loss, was: 3215.89599609375 global_step, was: 1000

The last step is predicting the value of y based on the values of x, the matrix of features. You can write a dictionary with the values you want to predict. Your model has 9 features, so you need to provide a value for each. The model will provide a prediction for each of them.

In the code below, you write the values of each feature contained in the df_predict csv file.

You need to write a new input_fn function because there is no label in the dataset. You can use the from_tensors API from the Dataset.

prediction_input = {
    'crim': [0.03359, 5.09017, 0.12650, 0.05515, 8.15174, 0.24522],
    'zn': [75.0, 0.0, 25.0, 33.0, 0.0, 0.0],
    'indus': [2.95, 18.10, 5.13, 2.18, 18.10, 9.90],
    'nox': [0.428, 0.713, 0.453, 0.472, 0.700, 0.544],
    'rm': [7.024, 6.297, 6.762, 7.236, 5.390, 5.782],
    'age': [15.8, 91.8, 43.4, 41.1, 98.9, 71.7],
    'dis': [5.4011, 2.3682, 7.9809, 4.0220, 1.7281, 4.0317],
    'tax': [252, 666, 284, 222, 666, 304],
    'ptratio': [18.3, 20.2, 19.7, 18.4, 20.2, 18.4]
}

def test_input_fn():
    dataset = tf.data.Dataset.from_tensors(prediction_input)
    return dataset

# Predict all our prediction_input
pred_results = model.predict(input_fn=test_input_fn)

Finally, you print the predictions.

for pred in enumerate(pred_results):
    print(pred)

Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train3/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. (0, {'predictions': array([32.297546], dtype=float32)}) (1, {'predictions': array([18.96125], dtype=float32)}) (2, {'predictions': array([27.270979], dtype=float32)}) (3, {'predictions': array([29.299236], dtype=float32)}) (4, {'predictions': array([16.436684], dtype=float32)}) (5, {'predictions': array([21.460876], dtype=float32)}) INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train3/model.ckpt-5000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. (0, {'predictions': array([35.60663], dtype=float32)}) (1, {'predictions': array([22.298521], dtype=float32)}) (2, {'predictions': array([25.74533], dtype=float32)}) (3, {'predictions': array([35.126694], dtype=float32)}) (4, {'predictions': array([17.94416], dtype=float32)}) (5, {'predictions': array([22.606628], dtype=float32)})

Summary

To train a model, you need to:

Define the features: Independent variables: X

Define the label: Dependent variable: y

Construct a train/test set

Define the initial weight

Define the loss function: MSE

Optimize the model: Gradient descent

Define:

Learning rate

Number of epochs

Batch size

In this tutorial, you learned how to use the high-level API for a linear regression TensorFlow estimator. You need to define:

Feature columns. If continuous: tf.feature_column.numeric_column(). You can populate a list with a Python list comprehension

The estimator: tf.estimator.LinearRegressor(feature_columns, model_dir)

A function to import the data, the batch size, and the number of epochs: input_fn()

After that, you are ready to train, evaluate, and make predictions with train(), evaluate() and predict()
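To tie those pieces together, here is a condensed sketch of the whole workflow under stated assumptions: the column names follow the housing data used above, the label name medv and the file boston_train.csv are hypothetical, and the tf.estimator interface shown is the TF1-style estimator API that TensorFlow 2.x exposes (deprecated in recent releases):

import tensorflow as tf
import pandas as pd

FEATURES = ['crim', 'zn', 'indus', 'nox', 'rm', 'age', 'dis', 'tax', 'ptratio']
LABEL = 'medv'  # assumed label column name

# Feature columns built with a Python list comprehension
feature_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]

# The estimator, with a directory to store checkpoints
model = tf.estimator.LinearRegressor(feature_columns=feature_columns, model_dir='train3')

# A function to import the data with a batch size and number of epochs
def input_fn(data_df, batch_size=128, num_epoch=None):
    dataset = tf.data.Dataset.from_tensor_slices((dict(data_df[FEATURES]), data_df[LABEL]))
    return dataset.repeat(num_epoch).batch(batch_size)

df_train = pd.read_csv('boston_train.csv')  # hypothetical path
model.train(input_fn=lambda: input_fn(df_train), steps=1000)

From there, evaluate() and predict() are called exactly as shown earlier in this section.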

How Does Haskell Map Work, With Examples?

Introduction to Haskell Map

Whenever we want to apply a function to each element of a given list and produce a new list consisting of the updated elements, we make use of the map function in Haskell. The map function takes a list and the function to be applied to each element of that list as input, and returns a new list as output. For ordinary lists, map is available directly from the Prelude. Haskell also provides a separate Data.Map module, whose Map container is internally implemented as a balanced binary tree, a very efficient representation compared to other implementations such as a hash table; that container type should not be confused with the list function map.


The syntax (type signature) of map in Haskell is as follows:

map :: (a -> b) -> [a] -> [b]

That is, map takes a function from a to b and a list of a values, and returns a list of b values.

How does Map work in Haskell?

Whenever we want to apply a function to each element of a given list and produce a new list consisting of the updated elements, we use the map function.

The map function takes two parameters, namely a list and the function to be applied to each element in the list, and returns a new list as the output.

For ordinary lists, the map function is available directly from the Prelude in Haskell, with no import required.

The separate Data.Map module provides a dictionary (map) container whose internal implementation is a balanced binary tree, a very efficient representation compared to alternatives such as a hash table; it should not be confused with the list function map.

Examples

Let us discuss some examples.

Example #1

Haskell program to demonstrate map function using which we are adding 2 to each element in the given list and display the resulting new list as the output on the screen:

-- defining a main function in which we are using the map function on a list
-- to add 2 to each element in the list and display the resulting new list
-- as the output on the screen
main = do
    let new = map (+2) [10, 20, 30, 40, 50]
    putStrLn "The elements in the new list after using map function is:\n"
    print $ new

The output of the above program is as shown below:

The elements in the new list after using map function is:

[12,22,32,42,52]

In the above program, we are defining a main function within which we are using the map function on a given list to add 2 to each element in the list and display the resulting list as the output on the screen.

Example #2

Haskell program to demonstrate the map function, using which we multiply each element in the given list by 2 and display the resulting new list as the output on the screen:

-- defining a main function in which we are using the map function on a list
-- to multiply each element in the given list by 2 and display the resulting
-- new list as the output on the screen
main = do
    let new = map (*2) [10, 20, 30, 40, 50]
    putStrLn "The elements in the new list after using map function is:\n"
    print $ new

The output of the above program is as shown below:

The elements in the new list after using map function is:

[20,40,60,80,100]

In the above program, we are defining a main function within which we are using the map function on a given list to multiply each element in the list by 2 and display the resulting list as the output on the screen.

Example #3

Haskell program to demonstrate map function using which we divide each element in the given list by 2 and display the resulting new list as the output on the screen:

-- defining a main function in which we are using the map function on a list
-- to divide each element in the given list by 2 and display the resulting
-- new list as the output on the screen
main = do
    let new = map (/2) [10, 20, 30, 40, 50]
    putStrLn "The elements in the new list after using map function is:\n"
    print $ new

The output of the above program is as shown below:

The elements in the new list after using map function is:

[5.0,10.0,15.0,20.0,25.0]

In the above program, we are defining a main function within which we are using the map function on a given list to divide each element in the list by 2 and display the resulting list as the output on the screen.

Example #4

Haskell program to demonstrate the map function, using which we subtract 2 from each element in the given list and display the resulting new list as the output on the screen:

-- defining a main function in which we are using the map function on a list
-- to subtract 2 from each element in the given list and display the resulting
-- new list as the output on the screen
-- (note: we use subtract 2, because the section (2-) would compute 2 - x
-- rather than x - 2)
main = do
    let new = map (subtract 2) [10, 20, 30, 40, 50]
    putStrLn "The elements in the new list after using map function is:\n"
    print $ new

The output of the above program is as shown below:

The elements in the new list after using map function is:

[8,18,28,38,48]

In the above program, we are defining a main function within which we are using the map function on a given list to subtract 2 from each element in the list and display the resulting list as the output on the screen.

Conclusion

In this article, we have learned the concept of map in the Haskell programming language through its definition, syntax, and working, with corresponding programming examples and their outputs to demonstrate them.


How To Set Up An External Hard Drive For Use With Mac OS X

When you first attach a hard drive to your Mac, it should automatically mount and be ready to use; however, before relying on it, you should consider taking a couple of precautionary steps to ensure that the drive continues to work as expected.

Note: This guide is for those whose drive isn't working properly with their Mac, or those who want to set up their drive to work specifically on OS X. By default, most drives should work with both Windows and OS X unless specified otherwise.

By default, if you have a new external hard disk and have not done anything to it, it will probably be in the FAT32 format. This format will work fine on a Mac, but it does have some limitations. For starters, FAT32 lacks journaling support, which would help prevent data corruption, and it lacks support for various filesystem permissions. In addition, FAT32 drives usually come with the Master Boot Record partition scheme, which does not work with Apple's CoreStorage routines and therefore will not allow OS-supported encryption of the drive (among other customizations).

If your external hard drive is not working as expected, or you need it to be in a Mac-specific format, here's how to set it up for use with Mac OS X:

To begin, format the drive: attach the external hard drive to your system, open Disk Utility, and then perform the following steps:

1. Select your drive device in the list of devices in the left-hand pane, which is the item above any storage volumes on the drive, and which may show the manufacturer name, media size, and so on.

2. Choose the "Partition" tab that appears.

3. Select "1 Partition" from the drop-down menu (or more, if you have a specific need for more than one volume). When you select a new partition layout from the drop-down menu, each new partition will automatically be formatted as Mac OS Extended (Journaled) by default, but be sure to double-check this by selecting each partition in the diagram and confirming its format.

Once you have completed all the above steps, the drive should unmount and remount with the new formatting settings, and should now be ready for use. Generally, formatting the drive in this manner is all that is needed; however, some people may wish to test the drive further to make sure the media does not contain any bad blocks or other errors beyond the scope of formatting.

Testing out the newly formatted hard drive

One readily available check is Disk Utility's built-in First Aid (verify and repair) function, which scans the volume's structures for errors; third-party tools can additionally scan the media itself for bad blocks.

