Linear Regression Tutorial With TensorFlow

What is Linear Regression?

Linear Regression is an approach in statistics for modelling the relationship between variables: a scalar response is modelled as a function of one or more explanatory variables. The relationship with one explanatory variable is called simple linear regression; with more than one explanatory variable, it is called multiple linear regression.

TensorFlow provides tools to have full control of the computations. This is done with the low-level API. On top of that, TensorFlow is equipped with a vast array of APIs to perform many machine learning algorithms. This is the high-level API. TensorFlow calls them estimators.

Low-level API: Build the architecture and optimization of the model from scratch. It is complicated for a beginner.

High-level API: Define the algorithm. It is easier and more user-friendly. TensorFlow provides a toolbox called estimators to construct, train, evaluate and make predictions.

In this tutorial, you will use the estimators only. The computations are faster and easier to implement. The first part of the tutorial explains how the gradient descent optimizer trains a linear regression. In the second part, you will use the Boston dataset to predict the price of a house using the TensorFlow estimator.

Download Boston DataSet

In this TensorFlow Regression tutorial, you will learn:

How to train a linear regression model

Before we begin to train the model, let’s have a look at what is a linear regression.

Imagine you have two variables, x and y, and your task is to predict the value of y knowing the value of x. If you plot the data, you can see a positive relationship between your independent variable, x, and your dependent variable, y.

You may observe that if x = 1, y will roughly be equal to 6, and if x = 2, y will be around 8.5.

This is not a very accurate method and it is prone to error, especially with a dataset of hundreds of thousands of points.

A linear regression is evaluated with an equation. The variable y is explained by one or many covariates. In your example, there is only one explanatory variable. If you have to write this equation, it will be:

y = α + βx + ε

With:

α is the bias, i.e., if x = 0, then y = α

β is the weight associated with x

ε is the residual or the error of the model. It includes what the model cannot learn from the data

Imagine you fit the model and you find the following solution:

α = 3.8

β = 2.78

You can substitute those numbers in the equation and it becomes:

y= 3.8 + 2.78x

You now have a better way to find the values of y: you can replace x with any value you want to predict y. In the image below, we have replaced x in the equation with all the values in the dataset and plotted the result.

The red line represents the fitted value, that is, the value of y for each value of x. You don't need to see the actual value of y to make a prediction; for each x there is a fitted value on the red line. You can also predict for values of x higher than 2!
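As a quick illustration (a minimal sketch, not from the original article), you can compute the fitted values with plain Python:

# Fitted equation found above: y = 3.8 + 2.78x
def predict(x):
    return 3.8 + 2.78 * x

for x in [0, 1, 2, 3]:
    print(x, predict(x))
# x=1 gives about 6.6, x=2 gives about 9.4, and you can extrapolate beyond x=2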

If you want to extend the linear regression to more covariates, you can do so by adding more variables to the model. The difference between traditional analysis and linear regression is that linear regression looks at how y reacts to each variable x taken independently.

Let's see an example. Imagine you want to predict the sales of an ice cream shop. The dataset contains different information such as the weather (i.e., rainy, sunny, cloudy) and customer information (i.e., salary, gender, marital status).

Traditional analysis will try to predict the sales by, say, computing the average for each variable and estimating the sales for different scenarios. It will lead to poor predictions and restrict the analysis to the chosen scenarios.

If you use linear regression, you can write an equation of this form (one weight per covariate):

sales = β₀ + β₁·weather + β₂·salary + β₃·gender + β₄·marital_status + ε

The algorithm will find the best solution for the weights; it means it will try to minimize the cost (the difference between the fitted line and the data points).

How the algorithm works

The algorithm will choose a random number for α and β and replace the values of x to get the predicted values of y. If the dataset has 100 observations, the algorithm computes 100 predicted values.

We can compute the error, noted ε, of the model, which is the difference between the predicted value and the real value. A positive error means the model underestimates the prediction of y, and a negative error means the model overestimates the prediction of y.

Your goal is to minimize the square of the error. The algorithm computes the mean of the squared errors. This step is called minimization of the error. For linear regression this is the Mean Squared Error, also called MSE. Mathematically, it is:

MSE(θ) = (1/m) · Σᵢ (θᵀxᵢ − yᵢ)²

Where:

θ is the vector of weights, so θᵀxᵢ refers to the predicted value

yᵢ is the real value

m is the number of observations

Note that θᵀ means the transpose of the weight vector, and the sum divided by m is the mathematical notation of the mean.

The goal is to find the best θ that minimizes the MSE.
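To make the formula concrete, here is a minimal NumPy sketch (the numbers are made up for the example, not from the article's dataset):

import numpy as np

# Hypothetical predicted values (from the current weights) and real values
y_pred = np.array([6.5, 9.3, 12.1])
y_true = np.array([6.0, 8.5, 13.0])

m = len(y_true)
mse = np.sum((y_pred - y_true) ** 2) / m  # mean of the squared errors
print(mse)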

If the average error is large, it means the model performs poorly and the weights are not chosen properly. To correct the weights, you need to use an optimizer. The traditional optimizer is called Gradient Descent.

The gradient descent takes the derivative and decreases or increases the weight. If the derivative is positive, the weight is decreased. If the derivative is negative, the weight increases. The model will update the weights and recompute the error. This process is repeated until the error does not change anymore. Each process is called an iteration. Besides, the gradients are multiplied by a learning rate. It indicates the speed of the learning.

If the learning rate is too small, it will take a very long time for the algorithm to converge (i.e., it requires lots of iterations). If the learning rate is too high, the algorithm might never converge.

You can see from the picture above that the model repeats the process about 20 times before finding a stable value for the weights, therefore reaching the lowest error.

Note that the error is not equal to zero but stabilizes around 5. It means the model makes a typical error of 5. If you want to reduce the error, you need to add more information to the model, such as more variables, or use different estimators.
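To illustrate the update rule described above, here is a minimal NumPy sketch of gradient descent for a simple linear regression. It is not the TensorFlow implementation, and the data and learning rate are made up for the example:

import numpy as np

# Hypothetical data with a roughly linear relationship
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([3.9, 6.6, 9.4, 12.0, 15.1])

alpha, beta = 0.0, 0.0   # initial (random) weights
learning_rate = 0.05

for iteration in range(200):
    y_pred = alpha + beta * x
    error = y_pred - y
    # Gradients of the MSE with respect to each weight
    grad_alpha = 2 * error.mean()
    grad_beta = 2 * (error * x).mean()
    # Move each weight in the opposite direction of its gradient
    alpha -= learning_rate * grad_alpha
    beta -= learning_rate * grad_beta

print(alpha, beta)  # converges towards weights close to the fitted line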

Remember the first equation, y = α + βx. The final weights are 3.8 and 2.78. The video below shows how gradient descent optimizes the loss function to find these weights.

How to train a Linear Regression with TensorFlow

Now that you have a better understanding of what is happening under the hood, you are ready to use the estimator API provided by TensorFlow to train your first linear regression.

You will use the Boston Dataset, which includes the following variables:

crim: per capita crime rate by town
zn: proportion of residential land zoned for lots over 25,000 sq. ft.
indus: proportion of non-retail business acres per town
nox: nitric oxides concentration
rm: average number of rooms per dwelling
age: proportion of owner-occupied units built before 1940
dis: weighted distances to five Boston employment centers
tax: full-value property-tax rate per $10,000
ptratio: pupil-teacher ratio by town
medv: median value of owner-occupied homes in thousands of dollars

You will create three different datasets:

Dataset    | Objective                                            | Shape
Training   | Train the model and obtain the weights               | 400, 10
Evaluation | Evaluate the performance of the model on unseen data | 100, 10
Predict    | Use the model to predict house value on new data     | 6, 10

The objective is to use the features of the dataset to predict the value of the house.

During the second part of the tutorial, you will learn how to use TensorFlow with three different ways to import the data:

With Pandas

With Numpy

With TensorFlow only

Note that all options provide the same results.

You will learn how to use the high-level API to build, train and evaluate a TensorFlow linear regression model. If you were using the low-level API, you would have to define by hand the:

Loss function

Optimizer: Gradient descent

Matrix multiplication

Graph and tensor

This is tedious and more complicated for a beginner.

Pandas

You need to import the necessary libraries to train the model.

import pandas as pd
from sklearn import datasets
import tensorflow as tf
import itertools

Step 1) Import the data with pandas.

You define the column names and store them in COLUMNS. You can use pd.read_csv() to import the data.

COLUMNS = ["crim", "zn", "indus", "nox", "rm", "age", "dis", "tax", "ptratio", "medv"]

training_set = pd.read_csv("E:/boston_train.csv", skipinitialspace=True, skiprows=1, names=COLUMNS)

test_set = pd.read_csv("E:/boston_test.csv", skipinitialspace=True, skiprows=1, names=COLUMNS)

prediction_set = pd.read_csv("E:/boston_predict.csv", skipinitialspace=True, skiprows=1, names=COLUMNS)

You can print the shape of the data.

print(training_set.shape, test_set.shape, prediction_set.shape)

Output

(400, 10) (100, 10) (6, 10)

Note that the label, i.e. your y, is included in the dataset. So you need to define two other lists: one containing only the features and one with the name of the label only. These two lists will tell your estimator what the features in the dataset are and which column is the label.

It is done with the code below.

FEATURES = ["crim", "zn", "indus", "nox", "rm", "age", "dis", "tax", "ptratio"] LABEL = "medv"

Step 2) Convert the data

You need to convert the numeric variables into the proper format. TensorFlow provides a method to convert continuous variables: tf.feature_column.numeric_column().

In the previous step, you defined the list of features you want to include in the model. Now you can use this list to convert them into numeric columns. If you want to exclude features from your model, feel free to drop one or more variables from the list FEATURES before you construct feature_cols.

Note that you will use a Python list comprehension with the list FEATURES to create a new list named feature_cols. It helps you avoid writing tf.feature_column.numeric_column() nine times. A list comprehension is a faster and cleaner way to create new lists.

feature_cols = [tf.feature_column.numeric_column(k) for k in FEATURES]
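If it helps to see what the comprehension produces, it is equivalent to the more verbose version below (assuming the imports from Step 1):

# Equivalent, explicit version of the list comprehension above
feature_cols = [
    tf.feature_column.numeric_column("crim"),
    tf.feature_column.numeric_column("zn"),
    # ... one numeric_column per name in FEATURES
]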

Step 3) Define the estimator

In this step, you need to define the estimator. TensorFlow currently provides 6 pre-built estimators, including 3 for classification tasks and 3 for regression tasks:

Regressor

DNNRegressor

LinearRegressor

DNNLinearCombinedRegressor

Classifier

DNNClassifier

LinearClassifier

DNNLinearCombinedClassifier

In this tutorial, you will use the LinearRegressor. To access this estimator, you need to use tf.estimator.LinearRegressor.

The function needs two arguments:

feature_columns: Contains the variables to include in the model

model_dir: path to store the graph, save the model parameters, etc

TensorFlow will automatically create a directory named train in your working directory. You need to use this path to access TensorBoard, as shown in the TensorFlow regression example below.

estimator = tf.estimator.LinearRegressor(
    feature_columns=feature_cols,
    model_dir="train")

Output

INFO:tensorflow:Using default config.

The tricky part with TensorFlow is the way the model is fed. TensorFlow is designed to work with parallel computing and very large datasets. Due to the limitation of machine resources, it is impossible to feed the model with all the data at once. For that, you need to feed a batch of data each time. Note that we are talking about huge datasets with millions or more records. If you don't use batches, you will end up with a memory error.

For instance, if your data contains 100 observations and you define a batch size of 10, it means the model will see 10 observations at each iteration and will need 10 iterations to go through the whole dataset (10*10 = 100).

When the model has seen all the data, it finishes one epoch. An epoch defines how many times you want the model to see the data. It is better to set this step to None and let the model perform the defined number of iterations (steps).
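As a quick check of this arithmetic, a small sketch:

import math

observations = 100
batch_size = 10
iterations_per_epoch = math.ceil(observations / batch_size)
print(iterations_per_epoch)  # 10 iterations to see all the data once (one epoch)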

A second piece of information to add is whether you want to shuffle the data before each iteration. During training, it is important to shuffle the data so that the model does not learn specific patterns of the dataset. If the model learns the details of the underlying pattern of the data, it will have difficulty generalizing the prediction to unseen data. This is called overfitting. The model performs well on the training data but cannot predict correctly on unseen data.

TensorFlow makes these two steps easy to do. When the data goes into the pipeline, it knows how many observations it needs (batch) and whether it has to shuffle the data.

To instruct Tensorflow how to feed the model, you can use pandas_input_fn. This object needs 5 parameters:

x: feature data

y: label data

batch_size: batch. By default 128

num_epochs: Number of epochs, by default 1

shuffle: Whether or not to shuffle the data. By default, None

You need to feed the model many times, so you define a function to repeat this process. Call this function get_input_fn.

def get_input_fn(data_set, num_epochs=None, n_batch=128, shuffle=True):
    return tf.estimator.inputs.pandas_input_fn(
        x=pd.DataFrame({k: data_set[k].values for k in FEATURES}),
        y=pd.Series(data_set[LABEL].values),
        batch_size=n_batch,
        num_epochs=num_epochs,
        shuffle=shuffle)

The usual method to evaluate the performance of a model is to:

Train the model

Evaluate the model in a different dataset

Make prediction

The TensorFlow estimator provides three different functions to carry out these three steps easily.

Step 4) Train the model

You can use the estimator's train method to train the model. It needs an input_fn and a number of steps. You can use the function you created above to feed the model. Then, you instruct the model to iterate 1000 times. Note that you don't specify the number of epochs; you let the model iterate 1000 times. If you set the number of epochs to 1, then the model will iterate 4 times: there are 400 records in the training set and the batch size is 128

128 rows

128 rows

128 rows

16 rows

Therefore, it is easier to set the number of epochs to None and define the number of iterations, as shown in the TensorFlow regression example below.

estimator.train(input_fn=get_input_fn(training_set, num_epochs=None, n_batch = 128, shuffle=False), steps=1000) Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 1 into train/model.ckpt. INFO:tensorflow:loss = 83729.64, step = 1 INFO:tensorflow:global_step/sec: 238.616 INFO:tensorflow:loss = 13909.657, step = 101 (0.420 sec) INFO:tensorflow:global_step/sec: 314.293 INFO:tensorflow:loss = 12881.449, step = 201 (0.320 sec) INFO:tensorflow:global_step/sec: 303.863 INFO:tensorflow:loss = 12391.541, step = 301 (0.327 sec) INFO:tensorflow:global_step/sec: 308.782 INFO:tensorflow:loss = 12050.5625, step = 401 (0.326 sec) INFO:tensorflow:global_step/sec: 244.969 INFO:tensorflow:loss = 11766.134, step = 501 (0.407 sec) INFO:tensorflow:global_step/sec: 155.966 INFO:tensorflow:loss = 11509.922, step = 601 (0.641 sec) INFO:tensorflow:global_step/sec: 263.256 INFO:tensorflow:loss = 11272.889, step = 701 (0.379 sec) INFO:tensorflow:global_step/sec: 254.112 INFO:tensorflow:loss = 11051.9795, step = 801 (0.396 sec) INFO:tensorflow:global_step/sec: 292.405 INFO:tensorflow:loss = 10845.855, step = 901 (0.341 sec) INFO:tensorflow:Saving checkpoints for 1000 into train/model.ckpt. INFO:tensorflow:Loss for final step: 5925.9873.

You can check TensorBoard with the following command:

activate hello-tf
# For MacOS
tensorboard --logdir=./train
# For Windows
tensorboard --logdir=train

Step 5) Evaluate your model

You can evaluate the fit of your model on the test set with the code below:

ev = estimator.evaluate( input_fn=get_input_fn(test_set, num_epochs=1, n_batch = 128, shuffle=False)) Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2023-05-13-01:43:13 INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Finished evaluation at 2023-05-13-01:43:13 INFO:tensorflow:Saving dict for global step 1000: average_loss = 32.15896, global_step = 1000, loss = 3215.896

You can print the loss with the code below:

loss_score = ev["loss"]
print("Loss: {0:f}".format(loss_score))

Output

Loss: 3215.895996

The model has a loss of 3215. You can check the summary statistic to get an idea of how big the error is.

training_set['medv'].describe()

Output

count    400.000000
mean      22.625500
std        9.572593
min        5.000000
25%       16.600000
50%       21.400000
75%       25.025000
max       50.000000
Name: medv, dtype: float64

From the summary statistics above, you know that the average price of a house is 22 thousand, with a minimum of 5 thousand and a maximum of 50 thousand. The loss of 3215 is summed over the 100 test observations, i.e. an average squared error of about 32 per house, so the model makes a typical error of roughly 5 to 6 thousand dollars.
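To make the relation between the reported loss and a typical dollar error explicit, a small sketch using the numbers printed above:

import math

loss = 3215.896    # total squared error reported by evaluate()
n_test = 100       # observations in the test set
average_loss = loss / n_test             # about 32.16, matches the average_loss metric
typical_error = math.sqrt(average_loss)  # about 5.7, i.e. roughly 5,700 dollars (medv is in thousands)
print(average_loss, typical_error)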

Step 6) Make the prediction

Finally, you can use the estimator's predict method to estimate the value of 6 Boston houses.

y = estimator.predict( input_fn=get_input_fn(prediction_set, num_epochs=1, n_batch = 128, shuffle=False))

To print the estimated values of y, you can use this code:

predictions = list(p["predictions"] for p in itertools.islice(y, 6))
print("Predictions: {}".format(str(predictions)))

Output

INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op.
Predictions: [array([32.297546], dtype=float32), array([18.96125], dtype=float32), array([27.270979], dtype=float32), array([29.299236], dtype=float32), array([16.436684], dtype=float32), array([21.460876], dtype=float32)]

The model forecasts the following values:

House | Prediction
1     | 32.29
2     | 18.96
3     | 27.27
4     | 29.29
5     | 16.43
6     | 21.46

Note that we don't know the true values of y. In the deep learning tutorial, you will try to beat the linear model.

Numpy Solution

This section explains how to train the model using a numpy input function to feed the data. The method is the same except that you will use the numpy_input_fn estimator.

training_set_n = pd.read_csv("E:/boston_train.csv").values

test_set_n = pd.read_csv("E:/boston_test.csv").values

prediction_set_n = pd.read_csv("E:/boston_predict.csv").values

Step 1) Import the data

First of all, you need to separate the feature variables from the label. You need to do this for the training and evaluation data. It is easier to define a function to split the data.

def prepare_data(df):
    X_train = df[:, :-3]
    y_train = df[:, -3]
    return X_train, y_train

You can use the function to split the label from the features of the train/evaluate dataset

X_train, y_train = prepare_data(training_set_n) X_test, y_test = prepare_data(test_set_n)

You need to exclude the last column of the prediction dataset because it contains only NaN

x_predict = prediction_set_n[:, :-2]

Confirm the shape of the arrays. Note that the label should not have a second dimension; its shape is (400,).

print(X_train.shape, y_train.shape, x_predict.shape)

Output

(400, 9) (400,) (6, 9)

You can construct the feature columns as follow:

feature_columns = [ tf.feature_column.numeric_column('x', shape=X_train.shape[1:])]

The estimator is defined as before: you pass the feature columns and the directory where to save the graph.

estimator = tf.estimator.LinearRegressor(
    feature_columns=feature_columns,
    model_dir="train1")

Output

INFO:tensorflow:Using default config.

You can use the numpy input function to feed the data to the model and then train the model. Note that we define the input function beforehand to ease readability.

# Train the estimatortrain_input = tf.estimator.inputs.numpy_input_fn( x={"x": X_train}, y=y_train, batch_size=128, shuffle=False, num_epochs=None) estimator.train(input_fn = train_input,steps=5000) Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 1 into train1/model.ckpt. INFO:tensorflow:loss = 83729.64, step = 1 INFO:tensorflow:global_step/sec: 490.057 INFO:tensorflow:loss = 13909.656, step = 101 (0.206 sec) INFO:tensorflow:global_step/sec: 788.986 INFO:tensorflow:loss = 12881.45, step = 201 (0.126 sec) INFO:tensorflow:global_step/sec: 736.339 INFO:tensorflow:loss = 12391.541, step = 301 (0.136 sec) INFO:tensorflow:global_step/sec: 383.305 INFO:tensorflow:loss = 12050.561, step = 401 (0.260 sec) INFO:tensorflow:global_step/sec: 859.832 INFO:tensorflow:loss = 11766.133, step = 501 (0.117 sec) INFO:tensorflow:global_step/sec: 804.394 INFO:tensorflow:loss = 11509.918, step = 601 (0.125 sec) INFO:tensorflow:global_step/sec: 753.059 INFO:tensorflow:loss = 11272.891, step = 701 (0.134 sec) INFO:tensorflow:global_step/sec: 402.165 INFO:tensorflow:loss = 11051.979, step = 801 (0.248 sec) INFO:tensorflow:global_step/sec: 344.022 INFO:tensorflow:loss = 10845.854, step = 901 (0.288 sec) INFO:tensorflow:Saving checkpoints for 1000 into train1/model.ckpt. INFO:tensorflow:Loss for final step: 5925.985. Out[23]:

You replicate the same step with a different input function to evaluate your model.

eval_input = tf.estimator.inputs.numpy_input_fn( x={"x": X_test}, y=y_test, shuffle=False, batch_size=128, num_epochs=1) estimator.evaluate(eval_input,steps=None) Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2023-05-13-01:44:00 INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train1/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Finished evaluation at 2023-05-13-01:44:00 INFO:tensorflow:Saving dict for global step 1000: average_loss = 32.158947, global_step = 1000, loss = 3215.8945 Out[24]: {'average_loss': 32.158947, 'global_step': 1000, 'loss': 3215.8945}

Finally, you can compute the predictions. They should be similar to the pandas results.

test_input = tf.estimator.inputs.numpy_input_fn(
    x={"x": x_predict},
    batch_size=128,
    num_epochs=1,
    shuffle=False)
y = estimator.predict(test_input)
predictions = list(p["predictions"] for p in itertools.islice(y, 6))
print("Predictions: {}".format(str(predictions)))

Output

INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train1/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op.
Predictions: [array([32.297546], dtype=float32), array([18.961248], dtype=float32), array([27.270979], dtype=float32), array([29.299242], dtype=float32), array([16.43668], dtype=float32), array([21.460878], dtype=float32)]

TensorFlow solution

The last section is dedicated to a TensorFlow-only solution. This method is slightly more complicated than the other two.

Note that if you use Jupyter notebook, you need to restart and clear the kernel to run this session.

TensorFlow has built a great tool to pass the data into the pipeline. In this section, you will build the input_fn function by yourself.

Step 1) Define the path and the format of the data

First of all, you declare two variables with the path of the csv file. Note that, you have two files, one for the training set and one for the testing set.

import tensorflow as tf df_train = "E:/boston_train.csv" df_eval = "E:/boston_test.csv"

Then, you need to define the columns you want to use from the csv file. We will use all. After that, you need to declare the type of variable it is.

Float variables are defined by [0.]

COLUMNS = ["crim", "zn", "indus", "nox", "rm", "age", "dis", "tax", "ptratio", "medv"]
RECORDS_ALL = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]

Step 2) Define the input_fn function

The function can be broken into three parts:

Import the data

Create the iterator

Consume the data

Below is the overall code to define the function. The code will be explained afterwards.

def input_fn(data_file, batch_size, num_epoch=None):
    # Step 1
    def parse_csv(value):
        columns = tf.decode_csv(value, record_defaults=RECORDS_ALL)
        features = dict(zip(COLUMNS, columns))
        #labels = features.pop('median_house_value')
        labels = features.pop('medv')
        return features, labels

    # Extract lines from input files using the Dataset API.
    dataset = (tf.data.TextLineDataset(data_file)  # Read text file
               .skip(1)  # Skip header row
               .map(parse_csv))

    dataset = dataset.repeat(num_epoch)
    dataset = dataset.batch(batch_size)
    # Step 3
    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return features, labels

**Import the data**

This method calls a function that you will create in order to instruct how to transform the data. In a nutshell, you need to pass the data into the TextLineDataset object, exclude the header and apply a transformation defined by a parsing function. Code explanation:

tf.data.TextLineDataset(data_file): This line reads the csv file

.skip(1): skips the header

.map(parse_csv): parses the records into tensors. You need to define a function to instruct the map object. You can call this function parse_csv.

This function parses the csv file with the method tf.decode_csv and declares the features and the label. The features can be declared as a dictionary or a tuple. You use the dictionary method because it is more convenient. Code explanation:

tf.decode_csv(value, record_defaults= RECORDS_ALL): the method decode_csv uses the output of the TextLineDataset to read the csv file. record_defaults instructs TensorFlow about the columns type.

dict(zip(COLUMNS, columns)): Populates the dictionary with all the columns extracted during this data processing

features.pop('medv'): Excludes the target variable from the feature variables and creates a label variable

The Dataset needs further elements to iteratively feed the tensors. Indeed, you need to add the method repeat to allow the dataset to continue indefinitely to feed the model. If you don't add the method, the model will iterate only one time and then throw an error because no more data are fed into the pipeline.

After that, you can control the batch size with the batch method. It means you tell the dataset how many records you want to pass into the pipeline for each iteration. If you set a big batch size, the model will be slow.

Step 3) Create the iterator

Now you are ready for the second step: create an iterator to return the elements in the dataset.

The simplest way of creating an iterator is with the method make_one_shot_iterator.

After that, you can create the features and labels from the iterator.

Step 4) Consume the data

You can check what happens with the input_fn function. You need to call the function in a session to consume the data. You try with a batch size equal to 1.

Note that, it prints the features in a dictionary and the label as an array.

It will show the first line of the csv file. You can try to run this code many times with different batch size.

next_batch = input_fn(df_train, batch_size=1, num_epoch=None)
with tf.Session() as sess:
    first_batch = sess.run(next_batch)
    print(first_batch)

Output

({'crim': array([2.3004], dtype=float32), 'zn': array([0.], dtype=float32), 'indus': array([19.58], dtype=float32), 'nox': array([0.605], dtype=float32), 'rm': array([6.319], dtype=float32), 'age': array([96.1], dtype=float32), 'dis': array([2.1], dtype=float32), 'tax': array([403.], dtype=float32), 'ptratio': array([14.7], dtype=float32)}, array([23.8], dtype=float32))

Step 5) Define the feature columns

You need to define the numeric columns as follow:

X1 = tf.feature_column.numeric_column('crim')
X2 = tf.feature_column.numeric_column('zn')
X3 = tf.feature_column.numeric_column('indus')
X4 = tf.feature_column.numeric_column('nox')
X5 = tf.feature_column.numeric_column('rm')
X6 = tf.feature_column.numeric_column('age')
X7 = tf.feature_column.numeric_column('dis')
X8 = tf.feature_column.numeric_column('tax')
X9 = tf.feature_column.numeric_column('ptratio')

Note that you need to combine all the variables in a bucket:

base_columns = [X1, X2, X3,X4, X5, X6,X7, X8, X9]

Step 6) Build the model

You can train the model with the estimator LinearRegressor.

model = tf.estimator.LinearRegressor(feature_columns=base_columns, model_dir='train3')

Output

You need to use a lambda function to be able to pass arguments to the function input_fn. If you don't use a lambda function, you cannot train the model.

# Train the estimator
model.train(steps=1000,
            input_fn=lambda: input_fn(df_train, batch_size=128, num_epoch=None))

Output

INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 1 into train3/model.ckpt. INFO:tensorflow:loss = 83729.64, step = 1 INFO:tensorflow:global_step/sec: 72.5646 INFO:tensorflow:loss = 13909.657, step = 101 (1.380 sec) INFO:tensorflow:global_step/sec: 101.355 INFO:tensorflow:loss = 12881.449, step = 201 (0.986 sec) INFO:tensorflow:global_step/sec: 109.293 INFO:tensorflow:loss = 12391.541, step = 301 (0.915 sec) INFO:tensorflow:global_step/sec: 102.235 INFO:tensorflow:loss = 12050.5625, step = 401 (0.978 sec) INFO:tensorflow:global_step/sec: 104.656 INFO:tensorflow:loss = 11766.134, step = 501 (0.956 sec) INFO:tensorflow:global_step/sec: 106.697 INFO:tensorflow:loss = 11509.922, step = 601 (0.938 sec) INFO:tensorflow:global_step/sec: 118.454 INFO:tensorflow:loss = 11272.889, step = 701 (0.844 sec) INFO:tensorflow:global_step/sec: 114.947 INFO:tensorflow:loss = 11051.9795, step = 801 (0.870 sec) INFO:tensorflow:global_step/sec: 111.484 INFO:tensorflow:loss = 10845.855, step = 901 (0.897 sec) INFO:tensorflow:Saving checkpoints for 1000 into train3/model.ckpt. INFO:tensorflow:Loss for final step: 5925.9873. Out[8]:

You can evaluate the fit of your model on the test set with the code below:

results = model.evaluate(steps =None,input_fn=lambda: input_fn(df_eval, batch_size =128, num_epoch = 1)) for key in results: print(" {}, was: {}".format(key, results[key])) Output INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2023-05-13-02:06:02 INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train3/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Finished evaluation at 2023-05-13-02:06:02 INFO:tensorflow:Saving dict for global step 1000: average_loss = 32.15896, global_step = 1000, loss = 3215.896 average_loss, was: 32.158958435058594 loss, was: 3215.89599609375 global_step, was: 1000

The last step is predicting the value of y based on the values of x, the matrix of features. You can write a dictionary with the values you want to predict. Your model has 9 features, so you need to provide a value for each. The model will provide a prediction for each of them.

In the code below, you write the values of each feature contained in the df_predict csv file.

You need to write a new input_fn function because there is no label in the dataset. You can use the from_tensors API from the Dataset.

prediction_input = {
    'crim': [0.03359, 5.09017, 0.12650, 0.05515, 8.15174, 0.24522],
    'zn': [75.0, 0.0, 25.0, 33.0, 0.0, 0.0],
    'indus': [2.95, 18.10, 5.13, 2.18, 18.10, 9.90],
    'nox': [0.428, 0.713, 0.453, 0.472, 0.700, 0.544],
    'rm': [7.024, 6.297, 6.762, 7.236, 5.390, 5.782],
    'age': [15.8, 91.8, 43.4, 41.1, 98.9, 71.7],
    'dis': [5.4011, 2.3682, 7.9809, 4.0220, 1.7281, 4.0317],
    'tax': [252, 666, 284, 222, 666, 304],
    'ptratio': [18.3, 20.2, 19.7, 18.4, 20.2, 18.4]
}

def test_input_fn():
    dataset = tf.data.Dataset.from_tensors(prediction_input)
    return dataset

# Predict all our prediction_input
pred_results = model.predict(input_fn=test_input_fn)

Finally, you print the predictions.

for pred in enumerate(pred_results):
    print(pred)

Output

INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train3/model.ckpt-1000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op.
(0, {'predictions': array([32.297546], dtype=float32)})
(1, {'predictions': array([18.96125], dtype=float32)})
(2, {'predictions': array([27.270979], dtype=float32)})
(3, {'predictions': array([29.299236], dtype=float32)})
(4, {'predictions': array([16.436684], dtype=float32)})
(5, {'predictions': array([21.460876], dtype=float32)})

INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from train3/model.ckpt-5000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op.
(0, {'predictions': array([35.60663], dtype=float32)})
(1, {'predictions': array([22.298521], dtype=float32)})
(2, {'predictions': array([25.74533], dtype=float32)})
(3, {'predictions': array([35.126694], dtype=float32)})
(4, {'predictions': array([17.94416], dtype=float32)})
(5, {'predictions': array([22.606628], dtype=float32)})

Summary

To train a model, you need to:

Define the features: Independent variables: X

Define the label: Dependent variable: y

Construct a train/test set

Define the initial weight

Define the loss function: MSE

Optimize the model: Gradient descent

Define:

Learning rate

Number of epoch

Batch size

In this tutorial, you learned how to use the high level API for a linear regression TensorFlow estimator. You need to define:

Feature columns. If continuous: tf.feature_column.numeric_column(). You can populate a list with python list comprehension

The estimator: tf.estimator.LinearRegressor(feature_columns, model_dir)

A function to import the data, the batch size and epoch: input_fn()

After that, you are ready to train, evaluate and make predictions with train(), evaluate() and predict().

You're reading Linear Regression Tutorial With Tensorflow

Autogpt Tutorial: Automate Coding Tasks With Ai

AutoGPT is an AI tool that automates coding tasks using GPT. Install Python and Pip, add API keys, and install AutoGPT to get started. Tutorials are available online for Windows, macOS, and Linux.

Are you tired of spending countless hours on repetitive coding tasks? Have you ever wondered if there was a way to automate these tasks and streamline your workflow? Look no further than AutoGPT, an open-source application that uses the GPT-4 language model to perform complex coding tasks and achieve goals with minimal human input. In this tutorial, we will guide you through the installation process and demonstrate how AutoGPT can revolutionize the way you approach coding.

AutoGPT is an Xcode Source Editor extension that enhances productivity by leveraging the capabilities of GPT-4, an AI language model developed by OpenAI. It is an experimental, open-source Python application that uses GPT-4 to act autonomously. AutoGPT is essentially an AI tool that creates different AI agents to meet specific coding tasks. It defines an agent that communicates with OpenAI’s API, and this agent’s objective is to carry out a variety of commands that can automate coding tasks. AutoGPT uses GPT’s superior text creation as part of its interesting breakdown of the AI’s phases. The tool can be set up on a local computer, and users can define the role of the AI and set goals for it to achieve. AutoGPT is an autonomous version of GPT-4 that can think and do things itself.

Also Check: What is Auto-GPT, and why does it matter?

There is no specific information available on the system requirements for AutoGPT. However, it is mentioned that Python is the only requirement for AutoGPT.

To install AutoGPT, follow these simple steps:

Download the ZIP file from Github.

Extract the ZIP file and copy the “Auto-GPT” folder.

Open the command prompt and navigate to the folder location.

Run the command “pip install -r requirements.txt” to install all the required libraries to run AutoGPT.

Finally, run the command “python -m autogpt” to start AutoGPT on your system.

On the first run, AutoGPT will ask you to name the AI and define its role. You can also set goals for the AI to achieve. Once you have completed the setup, AutoGPT will use the GPT-4 language model to perform tasks and achieve goals.

See Also: Auto-GPT vs AgentGPT: Understanding the Differences

It can perform a task with little human intervention and can self-prompt.

AutoGPT’s reasoning ability is similar to that of humans, making it a highly capable AI model.

It can complete tasks that you know nothing about, making it very versatile.

AutoGPT can interact with both online and local apps, software, and services like web browsers and word processors.

However, AutoGPT’s practicality may be limited due to the expensive GPT-4 model it uses.

The cost per task completion can be high, even for small tasks.

AutoGPT stands out from other AI tools because of the following reasons:

Independent operation: Unlike other AI tools, AutoGPT operates independently, which means that you no longer have to direct or steer the model to meet your needs. Instead, you can write your objectives, and the AI will do the rest for you.

Interact with apps and services: AutoGPT can interact with apps, software, and services both online and locally, like web browsers and word processors. This feature allows users to automate complex tasks that previously required human intervention.

Open-source project: AutoGPT is an open-source project that anyone can contribute to or use for their own purposes. This accessibility makes it easier for developers to use AutoGPT in their own projects and to improve the technology.

Breakthrough technology: AutoGPT is a breakthrough technology that creates its own prompts and enables large language models to perform complex multi-step tasks. Its unique abilities make it a valuable tool for a wide range of industries, including marketing, customer service, and content creation.

More Also: How to Use AgentGPT and AutoGPT

AutoGPT currently supports Python, JavaScript, and Swift, but the development team is constantly adding support for new languages.

Yes, you can modify the AI’s behavior by setting goals and defining its role during the setup process.

AutoGPT’s experimental nature poses challenges for users. The learning curve and potential issues require patience and persistence, as the developers work to refine and improve the software. However, the benefits of using AutoGPT make it worth the effort.

AutoGPT is a game-changer for coders who want to streamline their workflow and automate repetitive coding tasks. With its autonomous nature and use of GPT-4 language model, AutoGPT is a powerful tool that can save you time, reduce errors, and increase productivity. By following this tutorial, you can install and set up AutoGPT on your local computer and start enjoying the benefits of this tool.


Sap Monitoring & Performance Checks: Complete Tutorial With Tcodes

What is System Monitoring?

System monitoring is a daily routine activity, and this document provides a systematic step-by-step procedure for server monitoring. It gives an overview of technical aspects and concepts for proactive system monitoring. A few of them are:

Checking Application Servers.

Monitoring System-wide Work Processes.

Monitoring Work Processes for Individual Instances.

Monitoring Lock Entries.

CPU Utilization

Available Space in Database.

Monitoring Update Processes.

Monitoring System Log.

Buffer Statistics

Some others are:

Monitoring Batch Jobs

Spool Request Monitoring.

Number of Print Requests

ABAP Dump Analysis.

Database Performance Monitor.

Database Check.

Monitoring Application Users.

Why Daily Basic Checks / System Monitoring?

How do we monitor a SAP System?

Checking Application Servers (SM51)

This transaction is used to check all active application servers.

Here you can see which services or work processes are configured in each instance.

Monitoring Work Processes for Individual Instances SM50:

Displays all running, waiting, stopped and PRIV processes related to a particular instance. Under this step we check all the processes; the process status should always be waiting or running. If any process is having a status other than waiting or running we need to check that particular process and report accordingly.

This transaction displays a lot of information like:

Status of Work process (whether it’s occupied or not)

If the work process is running, you may be able to see the action taken by it in the Action column.

You can see which table is being worked upon

Some of the typical problems:

Some users may have PRIV status under the Reason column. This could be because the user's transaction is so big that it requires more memory. When this happens, the DIA work process will be 'owned' by the user and will not let other users use it. If this happens, check with the user and, if possible, run the job as a background job.

If there is a long print job on SPO work process, investigate the problem. It could be a problem related to the print server or printer.

Monitoring System-wide Work Processes (SM66)

By checking the work process load using the global work process overview, we can quickly investigate the potential cause of a system performance problem.

Monitor the work process load on all active instances across the system

Using the Global Work Process Overview screen, we can see at a glance:

The status of each application server

The reason why it is not running

Whether it has been restarted

The CPU and request run time

The user who has logged on and the client that they logged on to

The report that is running

Monitor Application User (AL08 and SM04)

This transaction displays all the users of active instances.

Monitoring Update Processes (SM13)


If there are no long-pending update records and no updates are going on, then this queue will be empty, as shown in the below screenshot.

But, if the Update is not active then find the below information:

Is the update active, if not, was it deactivated by the system or by a user?

Is any update cancelled?

Is there a long queue of pending updates older than 10 minutes?

Monitoring Lock Entries (SM12)

Execute Transaction SM12 and put ‘*’ in the field User Name

SAP provides a locking mechanism to prevent other users from changing the record that you are working on. In some situations, locks are not released. This could happen if the users are cut off i.e. due to network problem before they are able to release the lock.

These old locks need to be cleared or it could prevent access or changes to the records.

We can use lock statistics to monitor the locks that are set in the system. We record only those lock entries which are having date time stamp of the previous day.

Monitoring System Log (SM21)

We can use the log to pinpoint and rectify errors occurring in the system and its environment.

We check the log for the previous day with the following selection/option:

Enter Date and time.

Select Radio Button Problems and Warnings

Press Reread System Log.

Tune Summary (ST02)

Step 1: Go to ST02 to check the Tune summary.

Step 4: Note down the value and the Profile parameters

Step 5: Go to RZ10 (to change the Profile parameter values)

Step 6: Save the changes.

Step 7: Restart the server to take the new changes effect.

CPU Utilization (ST06)

Idle CPU utilization rate must be 60-65%, if it exceeds the value then we must start checking at least below things:

Run OS level commands – top and check which processes are taking most resources.

Go to SM50 or SM66. Check for any long running jobs or any long update queries being run.

Go to SM12 and check lock entries

Go to SM13 and check Update active status.

Check for the errors in SM21.

ABAP Dumps (ST22)

Here we check for previous day’s dumps

Spool Request Monitoring (SP01)

For spool request monitoring, execute SP01 and select as below:

Put ‘*’ in the field Created By

Here we record only those requests which are terminated with problems.

Monitoring Batch Jobs (SM37)

For Monitoring background jobs, execute SM37 and select as below:

Put ‘*’ in the field User Name and Job name

In Job status, select: Scheduled, Cancelled, Released and Finished requests.

Transactional RFC Administration (SM58)

Transactional RFC (tRFC, also originally known as asynchronous RFC) is an asynchronous communication method which executes the called function module in the RFC server only once.

We need to select the display period for which we want to view the tRFCs and then enter '*' in the username field to view all the calls which have not been executed correctly or are waiting in the queue.

QRFC Administration (Outbound Queue-SMQ1)

We should specify the client name here and see if there are any outgoing qRFCs in a waiting or error state.

QRFC Administration (Inbound Queue-SMQ2)

We should specify the client name here and see if there are any incoming qRFCs in a waiting or error state.

Database Administration (DB02)

After you select Current Sizes on the first screen we come to the below screen which shows us the current status of all the tablespaces in the system.

If any of the tablespaces is more than 95% and the auto extent is off then we need to add a new datafile so that the database is not full.

We can also determine the history of tablespaces.

We can select Months, Weeks or Days over here to see the changes which take place in a tablespace.

We can determine the growth of tablespace by analyzing these values.

Database Backup logs (DB12)

From this transaction, we could determine when the last successful backup of the system was. We can review the previous day’s backups and see if everything was fine or not.

We can also review the redo log files and see whether redo log backup was successful or not.

Quick Review

Daily Monitoring Tasks

Critical tasks

SAP System

Database

Critical tasks

No | Task                                             | Transaction | Procedure / Remark
1  | Check that the R/3 System is up.                 |             | Log onto the R/3 System
2  | Check that daily backup executed without errors  | DB12        | Check database backup.

SAP System

No | Task                                              | Transaction | Procedure / Remark
1  | Check that all application servers are up.        | SM51        | Check that all servers are up.
2  | Check work processes (started from SM51).         | SM50        | All work processes with a "running" or a "waiting" status
3  | Global Work Process overview                      | SM66        | Check that no work process is running more than 1800 seconds
4  | Look for any failed updates (update terminates).  | SM13        | Set date to one day ago; enter * in the user ID; set to "all" updates; check for lines with "Err."
5  | Check system log.                                 | SM21        | Set date and time to before the last log review. Check for errors, warnings, security messages and database problems.
6  | Review for canceled jobs.                         | SM37        | Enter an asterisk (*) in User ID. Verify that all critical jobs were successful.
7  | Check for "old" locks.                            | SM12        | Enter an asterisk (*) for the user ID.
8  | Check for users on the system.                    | SM04, AL08  | Review for an unknown or different user ID and terminal. This task should be done several times a day.
9  | Check for spool problems.                         | SP01        | Enter an asterisk (*) for Created By. Look for spool jobs that have been "In process" for over an hour.
10 | Check job log.                                    | SM37        | Check for new jobs and incorrect jobs.
11 | Review and resolve dumps.                         | ST22        | Look for an excessive number of dumps. Look for dumps of an unusual nature.
12 | Review buffer statistics.                         | ST02        | Look for swaps.

Database

No | Task                               | Transaction | Procedure / Remark
1  | Review error log for problems.     | ST04        |
2  | Database growth / missing indexes  | DB02        | If a tablespace is used more than 90%, add a new data file to it. Rebuild the missing indexes.
3  | Database statistics log            | DB13        |

How To Use Tensorflow Normalize?

Introduction to TensorFlow normalize

Tensorflow normalize is the method available in the tensorflow library that helps to bring out the normalization process for tensors in neural networks. The main purpose of this process is to bring the transformation so that all the features work on the same or similar level of scale. Normalization plays a vital role in boosting the training stability as well as the performance of the model. The main techniques that are used internally for normalization include log scaling, scaling to a specified range, z score, and clipping.


In this article, we will have a discussion over the points tensorflow normalize overviews, how to use tensorflow normalize, tensorflow Normalize features, tensorflow normalize examples, and finally concluding our statement.

Overview of TensorFlow normalize

Normalization is the process where we try to align all the parameters and features of the model on a similar scale so as to increase the overall performance and training quality for the model. The syntax of the normalized method is as shown below. Note that the normalize function works only for the data in the format of a numpy array.

tensorflow.keras.utils.normalize(sample_array, axis=-1, order=2)

The arguments used in the above syntax are described in detail one by one here –

Sample array – It is the NumPy array data that is to be normalized.

Axis – This parameter helps to specify the axis along which we want the numpy array to normalize.

Order – This parameter specifies the order of normalization that we want to consider. The value can be a number such as here 2 stands for the L2 norm normalization.

Output value – The returned value of this method is a copy of the original numpy array which is completely normalized.
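A minimal usage sketch of the method described above (the array values are made up):

import numpy as np
import tensorflow as tf

sample = np.array([[3.0, 4.0],
                   [6.0, 8.0]])

# L2-normalize each row (axis=-1, order=2): every row is divided by its L2 norm
normalized = tf.keras.utils.normalize(sample, axis=-1, order=2)
print(normalized)  # both rows become [0.6, 0.8]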

How to use TensorFlow normalize?

The steps that we need to follow while making the use of the normalizing method in our python tensorflow model are as described below –

Observe and try to explore the available data set which includes finding out the connection and correlation between the data.

The training data of the model should be normalized by using normalize method following its syntax mentioned above.

The steps for the linear model including building, compiling, and evaluation of the values should be carried out.

The DNN neural network should be built, compiled, and trained properly giving out the evaluation of values.

TensorFlow Normalize features

When you are trying to perform normalization with estimators, we will need to make use of TensorFlow.estimator, and additionally we also need to pass the normalizer function argument inside TensorFlow.feature_column.numeric_column in case it is a numeric feature, so that the same parameters are applied while evaluating, training and serving.

The feature can be mentioned in below syntax –

feature = tensorflow.feature_column.numeric_column(feature_name, normalizer_fn=zscore)

The parameter that we pass as the normalizer function here, the z-score, stands for the relation between the observations and the mean of the observations.

The zscore function that we used is as shown below:

return (x - calculatedMean) / calculatedStandardDeviation
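As a minimal sketch of the idea, assuming the mean and standard deviation of a feature have already been computed from the training data (the feature name and the statistics below are illustrative, not from the original code):

import functools
import tensorflow as tf

def zscore(x, mean, std):
    # Standardize the column using statistics computed on the training data
    return (x - mean) / std

# Hypothetical pre-computed statistics for one feature
mean, std = 52.0, 11.5
feature = tf.feature_column.numeric_column(
    'medianStipend',
    normalizer_fn=functools.partial(zscore, mean=mean, std=std))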

Let us consider one example where we will try to find the estimates by using normalization with numeric features. We will try to create standard deviation and mean values for which we have used above mentioned calculations. We will create the estimators by using the feature columns specified. Our code is as shown below –

‘number_of_people’, ‘cityPeopleCount’, ‘medianStipend’] normalization_parameters

The output of executing the above code gives the result as shown in the below image –

TensorFlow normalize examples

Let us consider one example. We will try to normalize all the data that is present in the range of -1 to 1. The TensorFlow 2.x library has come up with the experimental normalization engine which creates a way for the alternative of the manual normalization process.

The normalization object lets us feed in the data set, and its NumPy method returns a matrix containing all the normalized numbers. This matrix can be used to prepare a pandas data frame by simply passing it, which helps in plotting the histogram of the updated, normalized data set. We can also observe in the output graph that the data clusters around the zero value after normalization.

dnnModel.summary()

Histogram output –
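As a minimal sketch of the TensorFlow 2.x experimental preprocessing approach described above (the data values are made up):

import numpy as np
import tensorflow as tf

data = np.array([[1.0], [2.0], [3.0], [4.0]], dtype="float32")

# Experimental preprocessing layer: learns the mean and variance from the data
normalizer = tf.keras.layers.experimental.preprocessing.Normalization()
normalizer.adapt(data)

normalized = normalizer(data).numpy()  # standardized values, clustered around zero
print(normalized)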

Conclusion

The TensorFlow normalize method is used on a NumPy array to normalize the available data set. We have to find out correlations and study the input data and then go for the manual normalization process, or we can make use of the experimental normalization utilities provided in version 2 of TensorFlow.


Introduction To Saliency Map In An Image With Tensorflow 2.X Api

This article was published as a part of the Data Science Blogathon.

Introduction to Tensorflow 2.x

Suppose we want to focus on a particular part of an image: what concept will help us? Yes, it is a saliency map. It mainly focuses on specific pixels of an image while ignoring others.

Saliency Map

The saliency map of an image represents the most prominent and focused pixel of an image. Sometimes, the brighter pixels of an image tell us about the salient of the pixels. This means the brightness of the pixel is directly proportional to the saliency of an image.

Suppose we want to give attention to a particular part of an image, like wanting to focus on the image of a bird rather than the other parts like the sky, nest etc. Then by computing the saliency map, we will achieve this. It will crucially assist in reducing the computational cost. It is usually a grayscale image but can be converted into another format of a coloured image depending upon our visual comfortability. Saliency maps are also termed “heat maps” since the hotness/brightness of the image has an impactful effect on identifying the class of the object. The saliency map aims to determine the areas in the fovea that are salient or observable at every place and to influence the decision of attentive regions based on the spatial pattern of saliency. It is employed in a variety of Visual Attention models. The “ITTI and Koch” Computational Framework of Visual Attention is built on the notion of a saliency map.

How to Compute Saliency Map with Tensorflow?

The saliency map can be calculated by taking the derivatives of the class probability Pk with respect to the input image X.

saliency_map = dPk/dX

Hang on a minute! That seems quite familiar! Yes, it’s the same backpropagation which we use for training the model. We only need to take one more step: the gradient does not stop at the first layer of our network. Rather, we must return it to the input image X.

As a result, saliency maps provide a suitable characterization for each input pixel in accordance with a specific class prediction Pi. Pixels that are significant for flower prediction should cluster around flower pixels. Otherwise, something really strange is going on with the trained model.

Different Types of the Saliency Map

1. Static Saliency: It focuses on every static pixel of the image to calculate the important area of interest for saliency map analysis.

2. Motion Saliency: It focuses on the dynamic features of video data. The saliency map in a video is computed by calculating the optical flow of a video. The moving entities/objects are considered salient objects.

Code

We will walk step by step through the ResNet50 architecture, pre-trained on ImageNet, but you can use another pretrained deep learning model or bring your own trained model. We will illustrate how to develop a basic saliency map with this well-known DL model in TensorFlow 2.x, using a Wikimedia image as the test image throughout the tutorial.

We begin by creating a ResNet50 model with ImageNet weights. With a few simple helper functions, we load the image from disk and prepare it for feeding into the ResNet50.

# Import necessary packages
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def input_img(path):
    # Read the file, add a batch dimension, and resize to the ResNet50 input size
    # (decode_image handles both PNG and JPEG; the original snippet used decode_png on a JPEG file)
    image = tf.io.decode_image(tf.io.read_file(path), channels=3)
    image = tf.expand_dims(image, axis=0)
    image = tf.cast(image, tf.float32)
    image = tf.image.resize(image, [224, 224])
    return image

def normalize_image(img):
    # The body was missing in the original snippet; a reasonable reconstruction
    # collapses the colour channels and min-max scales the result to [0, 1]
    grads_norm = img[:, :, 0] + img[:, :, 1] + img[:, :, 2]
    grads_norm = (grads_norm - tf.reduce_min(grads_norm)) / (tf.reduce_max(grads_norm) - tf.reduce_min(grads_norm))
    return grads_norm

def get_image():
    import urllib.request
    filename = 'image.jpg'
    # The download URL was truncated in the original snippet; substitute any test image URL here
    url = "<your-test-image-url>"
    urllib.request.urlretrieve(url, filename)

Fig. 1: Input image

test_model = tf.keras.applications.resnet50.ResNet50()
# test_model.summary()
get_image()
img_path = "image.jpg"
input_img = input_img(img_path)
# Use the ResNet50 preprocessing so it matches the model
# (the original snippet called the DenseNet preprocess_input here)
input_img = tf.keras.applications.resnet50.preprocess_input(input_img)
plt.imshow(normalize_image(input_img[0]), cmap="ocean")
result = test_model(input_img)
max_idx = tf.argmax(result, axis=1)
tf.keras.applications.imagenet_utils.decode_predictions(result.numpy())

with tf.GradientTape() as tape:
    tape.watch(input_img)
    result = test_model(input_img)
    max_score = result[0, max_idx[0]]
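The snippet above stops at the top-class score. A minimal sketch of the remaining steps, assuming the normalize_image helper defined earlier, is:

# Gradient of the top-class score w.r.t. the input image is the raw saliency signal
grads = tape.gradient(max_score, input_img)
saliency = normalize_image(grads[0])                       # collapse channels, scale to [0, 1]
plt.imshow(saliency, cmap="ocean")                         # (1) saliency map
plt.show()
plt.imshow(normalize_image(input_img[0]), cmap="ocean")    # (2) input image
plt.imshow(saliency, cmap="jet", alpha=0.5)                # (3) overlaid image
plt.show()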

Fig. 2: (1) saliency map, (2) input image, (3) overlaid image

Conclusion

In this blog, we have defined the saliency map from several angles and added a pictorial representation to explain the term "saliency map" in depth. We have also implemented it in Python using the TensorFlow API. The results look promising and are easy to interpret.

In this article, you have learned about the saliency map for an image with TensorFlow.

A Python implementation on a sample image is also included.

The mathematical background of the saliency map is also covered in this article.

That’s it! Congratulations, you just computed a saliency map.




How To Perform Regression Testing?

In this article, we will learn what regression testing is, how to do it, when it is required, and its types.

What is Regression Testing?

Regression testing is usually performed as black-box testing. It is used to verify that a software code modification does not affect the product's existing functionality, and it ensures that a product's new functionality, issue patches, or other changes to an existing feature work properly.

Regression testing is a type of software testing in which test cases are re-run to ensure that the application's previous functionality is still operational and that the new changes haven't introduced any defects.

When there is a significant change to the original functionality, regression testing can be performed on a new build. It ensures that the code continues to function even though modifications have been made. Regression testing largely means re-testing the parts of the application that haven't changed.

Regression testing is also called the verification method. Many regression test cases are automated, because they must be run multiple times, and manually repeating the same test case over and over is time-consuming and monotonous.
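As an illustration of such automation, a repeatable regression test written for pytest might look like the sketch below; the login function and file name are hypothetical, not taken from this article.

# test_login.py - a hypothetical automated regression test re-run on every build
def login(email, password):
    # Stand-in for the real application logic under test
    return email == "user@example.com" and password == "secret"

def test_existing_login_still_works():
    # Guards previously working behaviour against new changes
    assert login("user@example.com", "secret") is True

def test_invalid_password_is_rejected():
    assert login("user@example.com", "wrong") is False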

How to do Regression Testing?

Regression testing is required when software maintenance includes upgrades, error fixes, optimization, or the deletion of existing functionality, since these changes may affect the rest of the system.

The following strategies can be used to perform regression testing −

Retest All

One method for performing regression testing is to re-test everything: all test suites are re-executed. Re-testing means that when a test fails and the cause of the failure is determined to be a software flaw, the problem is reported and a new version of the software with the flaw rectified is expected. In that case, we need to run the test again to confirm that the problem has been resolved. This is referred to as re-testing, or sometimes as confirmation testing.

Re-testing everything is quite costly, as it requires a significant amount of time and resources.

Regression Test Selection

Instead of executing the full test suite, this technique executes only a selected subset of the test cases.

The chosen test cases are split into two groups: (a) test cases that can be reused and (b) test cases that are no longer valid.

Test cases that are reusable can be used in subsequent regression cycles.

Test cases that are no longer valid cannot be used in subsequent regression cycles.

Test case prioritization

Prioritize the test cases based on business impact, critical functionality, and frequency of use. Selecting test cases in this way reduces the size of the regression test suite.
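One common way to implement both selection and prioritization in practice is to tag test cases with markers and run only the chosen subset first. The sketch below uses a hypothetical pytest setup, not something prescribed by this article.

# test_checkout.py - hypothetical tests tagged by priority
import pytest

@pytest.mark.critical
def test_payment_succeeds_for_valid_card():
    assert True  # stand-in for the real check

@pytest.mark.low_priority
def test_footer_links_render():
    assert True  # stand-in for the real check

# Run only the high-impact subset first:
#   pytest -m critical
# (markers should also be declared in pytest.ini to avoid warnings)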

Cases When Regression Testing Should Be Done

Regression testing should be performed in the following cases −

When there is a change in the requirements

Example − The "Remember password" option, which was previously available, has been removed from the login page.

When new functionality is added to the application

Example − A website offers a login feature that only allows users to log in using their email addresses. We’ve added a new option that allows you to log in using Facebook.

When the flaw has been corrected

Example − Consider the following scenario: the login button on a login page isn't working, and a tester reports a bug stating that the login button is broken. Once the developers have fixed the bug, the tester verifies that the login button is functioning as expected. Alongside this, the tester also checks the other functions related to the login button.

If there is a performance problem to fix

Example − The homepage takes 5 seconds to load; after the fix reduces the load time to 2 seconds, the page loads in less than half the time.

When there is an environment change

Example − When the database is upgraded from MySQL to Oracle.

How do you decide which test cases to use for regression testing?

Industry experience shows that customers often report faults caused by last-minute bug patches and their side effects. Because of this, picking test cases for regression testing is an art, not a simple operation.

Regression test cases can be selected from −

Test cases which have frequent defects

Functionalities which are more visible to users.

Test cases that check the main features of the product

All integration test cases

All complex test cases

Boundary value test cases

A sample of successful test cases

Previously failed test cases

Regression Testing Types

The following are the various types of regression testing −

URT stands for Unit Regression Testing.

RRT stands for Regional Regression Testing.

FRT stands for “Full or Complete Regression Testing.”

Unit Regression Testing (URT)

In this case we test only the changed unit in isolation, not the impact area, even though the change could influence components of other modules.

Example

In the first build, the developer creates a Search button that accepts 1-15 characters. The test engineer then tests the Search button using test case design techniques.

The customer now requires the Search button to accept 1-35 characters, along with some other changes to the requirements. The test engineer will test only the Search button to confirm that it accepts 1-35 characters and will not examine any other aspects of the first build.

Regional Regression Testing (RRT)

This is known as Regional Regression Testing, and it involves testing the modification together with the affected area or areas. We test the impact area because, where modules depend on one another, a change can affect other modules as well.

Consider the following scenario:

Suppose there are four separate modules, Module A, Module B, Module C, and Module D, provided by the developers for testing during the first build. The test engineer finds faults in Module D. The developers receive the bug report, fix the defects, and then send a second build.

The earlier flaws are fixed in the second build. The test engineer now realizes that the fixes in Module D have affected some functionality in Modules A and C. So, after testing Module D, where the bug has been resolved, the test engineer also checks the impact areas in Module A and Module C. This type of testing is therefore referred to as regional regression testing.

Full Regression Testing (FRT)

During the product's second and third releases, the client requests the addition of three or four new features, as well as the correction of some bugs from the previous release. The testing team then performs an impact analysis and determines that these changes require testing the complete product.

As a result, we may refer to testing the updated features as well as all of the remaining (old) features as Full Regression testing.

When the following conditions are met, we will perform the FRT −

When a change is made to the root component of the product. For example, the JVM is the foundation a Java application runs on, and any change to the JVM requires the entire Java program to be retested.

When we have a large number of modifications to make.
