
How To Program NFC Tags Using Android

NFC stands for Near Field Communication, and it allows two devices held close together to communicate with each other. An NFC tag is a small paper-like tag that can be programmed to perform tasks for you using NFC technology.

If you haven’t heard of this technology before, the above might sound a bit too technical to you, but it’s not. Once you’ve learned the basics of programming an NFC tag, you’ll find that you can use it to automate a number of your tasks that you may be doing manually every day.


Getting an NFC tag and programming it doesn't require any special skills. As long as you know how to use an app on your Android device, you can program an NFC tag to perform your specified tasks. These tags are also inexpensive and available on all the major websites, including Amazon, so you can pick up a few and set each one up to perform a different task.

Requirements For Programming An NFC Tag

In order to program NFC tags, there are a few requirements you must meet. These are basic, and as long as you use reasonably modern gadgets, you should be just fine.

You must have:

An NFC tag which can be bought very cheaply on Amazon.

An Android device with NFC compatibility. Check your phone’s specifications to confirm.

An app to program your tags. There’s a free app on the Play Store so you don’t need to worry about it.

Once you’ve confirmed you meet the minimum requirements, head onto the following section to start writing data to your NFC tag.

Writing Data To An NFC Tag Using Your Android Device

Programming an NFC tag basically means writing the actions you want to perform to your tag. This is done using a free app from the Play Store that you can download and use on your device.

With NFC enabled on your device, launch the Google Play Store, search for the app named Trigger, and install it.

Launch the newly installed app. When it opens, you’ll need to first create a new trigger. This can be done by tapping on the + (plus) sign at the bottom-right corner.

On the following screen, you’ll find the options you can create triggers for. The option you need to tap on is called NFC as this is what allows you to perform an action when an NFC tag is tapped.

After tapping NFC, tap on Next on the following screen to continue to program your tag.

The screen that follows lets you add restrictions to your tag. Here you can define the conditions when your tag is allowed to run. Tap on Done when you’ve specified the options.

Your NFC trigger is now ready. You now need to add an action to it so that your tag performs your chosen action when it’s tapped. Tap on Next to do it.

You’ll find various actions you can add to your tag for it to perform. As an example, we’ll be using the Bluetooth toggle option so that Bluetooth is turned on/off when the tag is tapped. Hit Next when you’re done.

You can customize the action even further on the following screen. Since we want to toggle Bluetooth, we’ll choose Toggle from the dropdown menu and tap on Add to Task.

You can now see all the actions you've added to the list. If you want, you can add more actions by tapping the + (plus) sign at the top. This will make your tag perform more than one task at a time. Then tap on Next to continue.

Tap on Done on the following screen.

Here comes the main part, where you actually write the data to your tag. Place your NFC tag near your phone's NFC antenna (usually near the rear camera) and the app will automatically write your actions to the tag.

You’ll get a success message when the tag is successfully programmed.

From now on, whenever you tap your phone to your NFC tag, it’ll perform the predefined actions on your device. In our case above, it’ll toggle the Bluetooth functionality on our phone.

You can even stick these tags somewhere convenient and then all you need to do is tap your phone at them to run your tasks.

How To Erase An NFC Tag On Android

If you want to use your tag for any other task, you can do so by erasing the existing data on it. You can program NFC tags as many times as you want and it’s pretty easy to get them formatted if you wish to do it.

Enable the NFC option on your device and launch the Trigger app.

Tap on the three horizontal lines at the top-left corner and select Other NFC Actions.

On the following screen, you’ll find an option that says Erase tag. Tap on it to select it.

Place your NFC tag on your phone like you did when you were programming it.

You’ll get a notification when your tag is erased. It’s instant in most cases.

Uses Of a Programmable NFC Tag

If this is your first time using NFC tags, we know you'll appreciate some suggestions as to what to use them for:

Create a WiFi NFC tag that lets your guests automatically connect to your WiFi.

Create an NFC tag for an alarm so you don’t need to mess with the alarm app.

Make a tag for your conference room that puts people’s devices in silent mode.

Program a tag to call someone specific in your contacts.


Topic Modeling: Predicting Multiple Tags Of Research Articles Using Onevsrest Strategy

This article was published as a part of the Data Science Blogathon

Recently I participated in an NLP hackathon — “Topic Modeling for Research Articles 2.0”. This hackathon was hosted by the Analytics Vidhya platform as a part of their HackLive initiative. The participants were guided by experts in a 2-hour live session and later on were given a week to compete and climb the leaderboard.

Problem Statement

Given the abstracts for a set of research articles, the task is to predict the tags for each article included in the test set.

The research article abstracts are sourced from the following 4 topics — Computer Science, Mathematics, Physics, Statistics. Each article can possibly have multiple tags among 25 tags like Number Theory, Applications, Artificial Intelligence, Astrophysics of Galaxies, Information Theory, Materials Science, Machine Learning et al. Submissions are evaluated on micro F1 Score between the predicted and observed tags for each article in the test set.
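Since submissions are scored on micro F1, it helps to recall how it is computed: true positives, false positives, and false negatives are pooled across all labels before the score is taken. A small sketch on made-up toy arrays (not the competition data):

```python
import numpy as np

# Toy multi-label ground truth and predictions: 3 articles x 4 tags
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [1, 0, 0, 1]])

# Micro-averaging pools TP/FP/FN over every (article, tag) cell first
tp = ((y_true == 1) & (y_pred == 1)).sum()   # 4
fp = ((y_true == 0) & (y_pred == 1)).sum()   # 0
fn = ((y_true == 1) & (y_pred == 0)).sum()   # 2
micro_f1 = 2 * tp / (2 * tp + fp + fn)       # 0.8

print(micro_f1)  # equals sklearn's f1_score(y_true, y_pred, average='micro')
```

Because every cell counts equally in the pooled totals, frequent tags influence the micro average more than rare ones.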

Complete Problem Statement and the dataset is available here.

Without further ado let’s get started with the code.

Loading and Exploring data

Importing necessary libraries —

%matplotlib inline
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score

Load train and test data from .csv files into Pandas DataFrame —

train_data = pd.read_csv('Train.csv')
test_data = pd.read_csv('Test.csv')

Train and Test data shape —

print("Train size:", train_data.shape)
print("Test size:", test_data.shape)


There are ~ 14k datapoints in the Train dataset and ~6k datapoints in the Test set. Overview of train and test datasets —



As we can see from the train data info, there are 31 columns: 1 column for id, 1 column for the abstract text, and 4 columns for topics, which together form our feature variables, while the remaining 25 columns are the class labels that we have to 'learn' for the prediction task.

topic_cols = ['Computer Science', 'Mathematics', 'Physics', 'Statistics']
target_cols = ['Analysis of PDEs', 'Applications', 'Artificial Intelligence', 'Astrophysics of Galaxies', 'Computation and Language', 'Computer Vision and Pattern Recognition', 'Cosmology and Nongalactic Astrophysics', 'Data Structures and Algorithms', 'Differential Geometry', 'Earth and Planetary Astrophysics', 'Fluid Dynamics', 'Information Theory', 'Instrumentation and Methods for Astrophysics', 'Machine Learning', 'Materials Science', 'Methodology', 'Number Theory', 'Optimization and Control', 'Representation Theory', 'Robotics', 'Social and Information Networks', 'Statistics Theory', 'Strongly Correlated Electrons', 'Superconductivity', 'Systems and Control']

How many datapoints have more than one tag?

my_list = []
for i in range(train_data.shape[0]):
    my_list.append(sum(train_data.iloc[i, 6:]))
pd.Series(my_list).value_counts()


So, most of our research articles have either 1 or 2 tags.

Data cleaning and preprocessing for OneVsRest Classifier

Before proceeding with data cleaning and pre-processing, it’s a good idea to first print and observe some random samples from training data in order to get an overview. Based on my observation I built the below pipeline for cleaning and pre-processing the text data:

De-contraction → Removing special chars → Removing stopwords →Stemming

First, we define some helper functions needed for text processing.

De-contracting the English phrases —

def decontracted(phrase):
    # specific
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can't", "cannot", phrase)
    # general
    phrase = re.sub(r"n't", " not", phrase)
    phrase = re.sub(r"'re", " are", phrase)
    phrase = re.sub(r"'s", " is", phrase)
    phrase = re.sub(r"'d", " would", phrase)
    phrase = re.sub(r"'ll", " will", phrase)
    phrase = re.sub(r"'t", " not", phrase)
    phrase = re.sub(r"'ve", " have", phrase)
    phrase = re.sub(r"'m", " am", phrase)
    phrase = re.sub(r"'em", " them", phrase)
    return phrase
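A quick sanity check of the helper (the function is repeated here so the snippet runs on its own; the sample sentence is made up):

```python
import re

def decontracted(phrase):
    # specific contractions first, so "won't" doesn't become "wo not"
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can't", "cannot", phrase)
    # general patterns
    phrase = re.sub(r"n't", " not", phrase)
    phrase = re.sub(r"'re", " are", phrase)
    phrase = re.sub(r"'s", " is", phrase)
    phrase = re.sub(r"'d", " would", phrase)
    phrase = re.sub(r"'ll", " will", phrase)
    phrase = re.sub(r"'t", " not", phrase)
    phrase = re.sub(r"'ve", " have", phrase)
    phrase = re.sub(r"'m", " am", phrase)
    phrase = re.sub(r"'em", " them", phrase)
    return phrase

print(decontracted("We can't assume it won't help; it's a trick we've used."))
# We cannot assume it will not help; it is a trick we have used.
```

The ordering matters: the specific rules for "won't" and "can't" must run before the general "n't" rule, or those words would be mangled.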

(I prefer my own custom set of stopwords to the built-in ones. It lets me readily modify the stopword set depending on the problem.)

stopwords = ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've", "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn', "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn', "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", 'won', "won't", 'wouldn', "wouldn't"]

Alternatively, you can directly import stopwords from word cloud API —

from wordcloud import WordCloud, STOPWORDS
stopwords = set(list(STOPWORDS))

Stemming using Porter stemmer —

def stemming(sentence):
    stemmer = PorterStemmer()
    token_words = word_tokenize(sentence)
    stem_sentence = []
    for word in token_words:
        stem_sentence.append(stemmer.stem(word))
        stem_sentence.append(" ")
    return "".join(stem_sentence)

Now that we’ve defined all the functions, let’s write a text pre-processing pipeline —

def text_preprocessing(text):
    preprocessed_abstract = []
    for sentence in text:
        sent = decontracted(sentence)
        sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
        sent = ' '.join(e.lower() for e in sent.split() if e.lower() not in stopwords)
        sent = stemming(sent)
        preprocessed_abstract.append(sent.strip())
    return preprocessed_abstract

Preprocessing the train data abstract text—

train_data['preprocessed_abstract'] = text_preprocessing(train_data['ABSTRACT'].values)
train_data[['ABSTRACT', 'preprocessed_abstract']].head()


Likewise, preprocessing the test dataset –

test_data['preprocessed_abstract'] = text_preprocessing(test_data['ABSTRACT'].values)
test_data[['ABSTRACT', 'preprocessed_abstract']].head()


We no longer need the original 'ABSTRACT' column, so you may drop it from both datasets.

Text data encoding

Splitting train data into train and validation datasets —

X = train_data[['Computer Science', 'Mathematics', 'Physics', 'Statistics', 'preprocessed_abstract']]
y = train_data[target_cols]

from sklearn.model_selection import train_test_split
X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size = 0.25, random_state = 21)
print(X_train.shape, y_train.shape)
print(X_cv.shape, y_cv.shape)


As we can see, we have got ~ 10500 datapoints in our training set and ~3500 datapoints in the validation set.

TF-IDF vectorization of text data

Building vocabulary —

combined_vocab = list(train_data['preprocessed_abstract']) + list(test_data['preprocessed_abstract'])

Yes, here I’ve knowingly committed a sin! I have used the complete train and test data for building vocabulary to train a model on it. Ideally, your model shouldn’t be seeing the test data.

vectorizer = TfidfVectorizer(min_df = 5, max_df = 0.5, sublinear_tf = True, ngram_range = (1, 1))
vectorizer.fit(combined_vocab)

X_train_tfidf = vectorizer.transform(X_train['preprocessed_abstract'])
X_cv_tfidf = vectorizer.transform(X_cv['preprocessed_abstract'])
print(X_train_tfidf.shape, y_train.shape)
print(X_cv_tfidf.shape, y_cv.shape)


After TF-IDF encoding we obtain 9136 features, each of them corresponding to a distinct word in the vocabulary.
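To make the pruning parameters concrete, here is a tiny toy corpus (invented strings, not the competition data) showing how min_df and max_df shrink the vocabulary:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["deep learning model", "deep graph model",
        "deep number theory", "graph theory proof"]

# No pruning: every distinct word becomes a feature
full = TfidfVectorizer().fit(docs)
print(sorted(full.vocabulary_))

# min_df=2 drops words seen in fewer than 2 documents;
# max_df=0.5 drops words whose document frequency exceeds 50%
# of the corpus ("deep" appears in 3 of 4 docs, so it goes)
pruned = TfidfVectorizer(min_df=2, max_df=0.5).fit(docs)
print(sorted(pruned.vocabulary_))  # ['graph', 'model', 'theory']
```

The same mechanism, applied with min_df = 5 and max_df = 0.5 to roughly 20k abstracts, is what cuts the raw vocabulary down to the 9136 retained features.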

Some important things you should know here —

I didn’t directly jump to a conclusion that I should go with TF-IDF vectorization. I tried different methods like BOW, W2V using a pre-trained GloVe model, etc. Among them, TF-IDF turned out to be the best performing so here I’m demonstrating only this.

It didn’t magically appear to me that I should be going with uni-grams. I tried bi-grams, tri-grams, and even four-grams; the model employing the unigrams gave the best performance among all.

Text data encoding is a tricky thing. Especially in competitions where even a difference of 0.001 in the performance metric can push you several places behind on the leaderboard. So, one should be open to trying different permutations & combinations at a rudimentary stage.

Before we proceed with modeling, we stack all the features (topic features + TF-IDF encoded text features) together for both the train and test datasets.

from scipy.sparse import hstack
X_train_data_tfidf = hstack((X_train[topic_cols], X_train_tfidf))
X_cv_data_tfidf = hstack((X_cv[topic_cols], X_cv_tfidf))

Multi-label classification using OneVsRest Classifier

Until now we were only dealing with refining and vectorizing the feature variables. As we know, this is a multi-label classification problem and each document may have one or more predefined tags simultaneously. We already saw that several datapoints have 2 or 3 tags.

Most traditional machine learning algorithms are developed for single-label classification problems. Therefore a lot of approaches in the literature transform the multi-label problem into multiple single-label problems so that the existing single-label algorithms can be used.
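One such transformation, "binary relevance", is what scikit-learn's OneVsRestClassifier implements: it clones the base estimator once per label and fits each clone on that label's 0/1 column. A minimal sketch on random toy data (shapes and numbers are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 5)                       # 100 "documents", 5 features
Y = (rng.rand(100, 3) > 0.5).astype(int)   # 3 independent binary tags

clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

print(len(clf.estimators_))       # 3: one binary classifier per tag
proba = clf.predict_proba(X[:2])
print(proba.shape)                # (2, 3): per-tag probabilities per document
```

Because each label gets its own classifier, the per-label probabilities are independent and need not sum to 1, which is exactly what lets us tune a separate decision threshold per tag later on.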

('C' denotes the inverse of regularization strength. Smaller values specify stronger regularization.)

from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

C_range = [0.01, 0.1, 1, 10, 100]
for i in C_range:
    clf = OneVsRestClassifier(LogisticRegression(C = i, solver = 'sag'))
    clf.fit(X_train_data_tfidf, y_train)
    y_pred_train = clf.predict(X_train_data_tfidf)
    y_pred_cv = clf.predict(X_cv_data_tfidf)
    f1_score_train = f1_score(y_train, y_pred_train, average = 'micro')
    f1_score_cv = f1_score(y_cv, y_pred_cv, average = 'micro')
    print("C:", i, "Train Score:", f1_score_train, "CV Score:", f1_score_cv)
    print("- " * 50)


We can see that the highest validation score is obtained at C = 10. But the training score here is also very high, which was kind of expected.

Let’s tune the hyper-parameter even further —

C_range = [10, 20, 40, 70, 100]
for i in C_range:
    clf = OneVsRestClassifier(LogisticRegression(C = i, solver = 'sag'))
    clf.fit(X_train_data_tfidf, y_train)
    y_pred_train = clf.predict(X_train_data_tfidf)
    y_pred_cv = clf.predict(X_cv_data_tfidf)
    f1_score_train = f1_score(y_train, y_pred_train, average = 'micro')
    f1_score_cv = f1_score(y_cv, y_pred_cv, average = 'micro')
    print("C:", i, "Train Score:", f1_score_train, "CV Score:", f1_score_cv)
    print("- " * 50)


The model with C = 20 gives the best score on the validation set. So, going further, we take C = 20.

If you notice, here we have used the default L2 penalty for regularization as the model with L2 gave me the best result among L1, L2, and elastic-net mixing.

Determining the right thresholds for OneVsRest Classifier

The default threshold in binary classification algorithms is 0.5. But this may not be the best threshold given the data and the performance metric that we intend to maximize. As we know, the F1 score is the harmonic mean of precision and recall:

F1 = 2 × (precision × recall) / (precision + recall)

A good threshold(for each distinct label) would be the one that maximizes the F1 score.

def get_best_thresholds(true, pred):
    thresholds = [i/100 for i in range(100)]
    best_thresholds = []
    for idx in range(25):
        f1_scores = [f1_score(true[:, idx], (pred[:, idx] > thresh) * 1) for thresh in thresholds]
        best_thresh = thresholds[np.argmax(f1_scores)]
        best_thresholds.append(best_thresh)
    return best_thresholds

In a nutshell, what the above function does is, for each of the 25 class labels, it computes the F1 scores corresponding to each of the hundred thresholds and then selects that threshold which returns the maximum F1 score for the given class label.
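To see why this matters, here is a toy, hand-made example of one label where the default 0.5 threshold misses two of the three positives while a tuned threshold catches all of them (the probabilities are invented for illustration):

```python
import numpy as np

# Hypothetical predicted probabilities and ground truth for one label
proba = np.array([0.9, 0.45, 0.4, 0.35, 0.2, 0.1])
true = np.array([1, 1, 1, 0, 0, 0])

def f1(true, pred):
    # Plain binary F1 from pooled counts
    tp = ((true == 1) & (pred == 1)).sum()
    fp = ((true == 0) & (pred == 1)).sum()
    fn = ((true == 1) & (pred == 0)).sum()
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

default = f1(true, (proba > 0.5).astype(int))   # only one positive caught
thresholds = [i / 100 for i in range(100)]
scores = [f1(true, (proba > t).astype(int)) for t in thresholds]
best = thresholds[int(np.argmax(scores))]
print(default, best, max(scores))  # 0.5 0.35 1.0
```

Here lowering the cut-off to 0.35 lifts the label's F1 from 0.5 to 1.0; the real `get_best_thresholds` does this same sweep for each of the 25 labels.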

If the individual F1 score is high, the micro-average F1 will also be high. Let’s get the thresholds —

clf = OneVsRestClassifier(LogisticRegression(C = 20, solver = 'sag'))
clf.fit(X_train_data_tfidf, y_train)
y_pred_train_proba = clf.predict_proba(X_train_data_tfidf)
y_pred_cv_proba = clf.predict_proba(X_cv_data_tfidf)
best_thresholds = get_best_thresholds(y_cv.values, y_pred_cv_proba)
print(best_thresholds)


[0.45, 0.28, 0.19, 0.46, 0.24, 0.24, 0.24, 0.28, 0.22, 0.2, 0.22, 0.24, 0.24, 0.41, 0.32, 0.15, 0.21, 0.33, 0.33, 0.29, 0.16, 0.66, 0.33, 0.36, 0.4]

As you can see we have obtained a distinct threshold value for each class label. We’re going to use these same values in our final OneVsRest Classifier model. Making predictions using the above thresholds —

y_pred_cv = np.empty_like(y_pred_cv_proba)
for i, thresh in enumerate(best_thresholds):
    y_pred_cv[:, i] = (y_pred_cv_proba[:, i] > thresh) * 1
print(f1_score(y_cv, y_pred_cv, average = 'micro'))



Thus, we have managed to obtain a significantly better score using the variable thresholds.

So far we have performed hyper-parameter tuning on the validation set and managed to obtain the optimal hyperparameter (C = 20). Also, we tweaked the thresholds and obtained the right set of thresholds for which the F1 score is maximum.

Making a prediction on the test data using OneVsRest Classifier

Using the above parameters, let's move on to train a full-fledged model on the entire training data and make a prediction on the test data.

# train and test data
X_tr = train_data[['Computer Science', 'Mathematics', 'Physics', 'Statistics', 'preprocessed_abstract']]
y_tr = train_data[target_cols]
X_te = test_data[['Computer Science', 'Mathematics', 'Physics', 'Statistics', 'preprocessed_abstract']]

# text data encoding
X_tr_tfidf = vectorizer.transform(X_tr['preprocessed_abstract'])
X_te_tfidf = vectorizer.transform(X_te['preprocessed_abstract'])

# stacking
X_tr_data_tfidf = hstack((X_tr[topic_cols], X_tr_tfidf))
X_te_data_tfidf = hstack((X_te[topic_cols], X_te_tfidf))

# modeling and making predictions with the best thresholds
clf = OneVsRestClassifier(LogisticRegression(C = 20))
clf.fit(X_tr_data_tfidf, y_tr)
y_pred_tr_proba = clf.predict_proba(X_tr_data_tfidf)
y_pred_te_proba = clf.predict_proba(X_te_data_tfidf)
y_pred_te = np.empty_like(y_pred_te_proba)
for i, thresh in enumerate(best_thresholds):
    y_pred_te[:, i] = (y_pred_te_proba[:, i] > thresh) * 1

Once we obtain our test predictions, we attach them to the respective ids (as in the sample submission file) and make a submission in the designated format.

ss = pd.read_csv('SampleSubmission.csv')
ss[target_cols] = y_pred_te
ss.to_csv('LR_tfidf10k_L2_C20.csv', index = False)

The best thing about participating in hackathons is that you get to experiment with different techniques, so when you encounter a similar kind of problem in the future, you have a fair understanding of what works and what doesn't. You also get to learn a lot from other participants by actively taking part in the discussions.

You can find the complete code here on my GitHub profile.



How To Update The Contents Of A Resultset Using A Jdbc Program?

To update the contents of a ResultSet, you need to create a Statement whose ResultSet type is scroll-sensitive and whose concurrency mode is updatable, as:

Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);

Assume we have a table named Employees with the following contents:

| Id | Name    | Salary | Location       |
| 1  | Amit    | 3000   | Hyderabad      |
| 2  | Kalyan  | 4000   | Vishakhapatnam |
| 3  | Renuka  | 6000   | Delhi          |
| 4  | Archana | 9000   | Mumbai         |
| 5  | Sumith  | 11000  | Hyderabad      |

The following program increases every salary by 5000 through the ResultSet, then deletes one record:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ResultSetExample {
   public static void main(String[] args) throws Exception {
      DriverManager.registerDriver(new com.mysql.jdbc.Driver());
      String mysqlUrl = "jdbc:mysql://localhost/TestDB";
      Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
      System.out.println("Connection established......");
      Statement stmt = con.createStatement(
         ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
      ResultSet rs = stmt.executeQuery("select * from Employees");
      System.out.println("Contents of the table: ");
      printRs(rs);
      rs.beforeFirst();
      while ( {
         // Retrieve by column name and raise each salary by 5000
         int newSal = rs.getInt("Salary") + 5000;
         rs.updateInt("Salary", newSal);
         rs.updateRow();
      }
      System.out.println("Contents of the ResultSet after increasing salaries");
      printRs(rs);
      rs.beforeFirst();
      rs.absolute(2);
      System.out.println("Record we need to delete: ");
      System.out.print("ID: " + rs.getInt("id"));
      System.out.print(", Salary: " + rs.getInt("Salary"));
      System.out.print(", Name: " + rs.getString("Name"));
      System.out.println(", Location: " + rs.getString("Location"));
      rs.deleteRow();
      System.out.println("Contents of the ResultSet after deleting one record...");
      printRs(rs);
      System.out.println("Goodbye!");
   }
   public static void printRs(ResultSet rs) throws SQLException {
      rs.beforeFirst();
      while ( {
         System.out.print("ID: " + rs.getInt("id"));
         System.out.print(", Salary: " + rs.getInt("Salary"));
         System.out.print(", Name: " + rs.getString("Name"));
         System.out.println(", Location: " + rs.getString("Location"));
      }
      System.out.println();
   }
}

Output
Connection established......
Contents of the table:
ID: 1, Salary: 3000, Name: Amit, Location: Hyderabad
ID: 2, Salary: 4000, Name: Kalyan, Location: Vishakhapatnam
ID: 3, Salary: 6000, Name: Renuka, Location: Delhi
ID: 4, Salary: 9000, Name: Archana, Location: Mumbai
ID: 5, Salary: 11000, Name: Sumith, Location: Hyderabad
Contents of the ResultSet after increasing salaries
ID: 1, Salary: 8000, Name: Amit, Location: Hyderabad
ID: 2, Salary: 9000, Name: Kalyan, Location: Vishakhapatnam
ID: 3, Salary: 11000, Name: Renuka, Location: Delhi
ID: 4, Salary: 14000, Name: Archana, Location: Mumbai
ID: 5, Salary: 16000, Name: Sumith, Location: Hyderabad
Record we need to delete:
ID: 2, Salary: 9000, Name: Kalyan, Location: Vishakhapatnam
Contents of the ResultSet after deleting one record...
ID: 1, Salary: 8000, Name: Amit, Location: Hyderabad
ID: 3, Salary: 11000, Name: Renuka, Location: Delhi
ID: 4, Salary: 14000, Name: Archana, Location: Mumbai
ID: 5, Salary: 16000, Name: Sumith, Location: Hyderabad
Goodbye!

How To Continue Using Call Recording Apps On Android Pie And Q

Android Pie went official last year and brought with it a laundry list of new features, such as a new Material Theme UI for many apps, gesture navigation, Digital Wellbeing, and more. All these features are great and come in handy in daily use, but one important piece of functionality has been removed from Android Pie: phone call recording.

Call recording apps still work on rooted phones, but if you are wary of voiding the warranty by rooting your phone, you are not alone. So here are a few nifty workarounds to overcome the Android Pie, and now Android Q, limitation on call recording.

A Brief Overview

Before I explain the workarounds, I would highly suggest you check whether your smartphone has a native call recording option. As far as I know, OnePlus comes with a native call recorder and you can record calls right from the dialer app without having to set up anything. It even works on Android 9 and 10.

As an aside, Google recently hinted that they are bringing call recording functionality on their stock dialer app as spotted by XDA on Pixel 4. What it means is that, in the future, all smartphones with Stock Android will have built-in call recording capability once the feature rolls out. Also, Xiaomi might ship Stock dialer on MIUI in the EU region so users from that particular region can enjoy native call recording as well. But again, there might be limitations in certain regions so keep that in mind.

Now that we have gone through the basics, let’s find out how we can record calls on Android Pie and Q devices that don’t have a built-in call recording option.

Record Calls on Android Pie and Q

1. Cube Recorder ACR

In our testing, we found that Cube Recorder is one of the handful of call recording apps able to record calls even on devices that don't have a built-in call recorder and are running Android Pie or Q. We tested Cube ACR on a Pixel 2 XL running Android 10 and it worked fine. Similarly, we tested it on a Samsung S10e running One UI 2.0 based on Android 10, and it recorded calls without any issues. Sure, the audio was a bit mushy, but it recorded both sides of the call nonetheless. So you can install Cube ACR on your Xiaomi, Realme, Nokia, or any other smartphone and check whether it works. For the best recording experience, open the App Settings and select "Voice Recognition Software" as the input option.

Download: Cube Call Recorder ACR by Catalina Group (Free)

2. Call Recorder – ACR

Call Recorder – ACR is another app that seems to work on a lot of devices, including Samsung Galaxy devices. However, the Play Store version of this app can't record calls unless your device is running Android Oreo, so you will have to download the unchained version of the app from APKMirror. Rest assured, the app is genuine and developed by NLL, the company behind both apps. Due to a Play Store policy, NLL is unable to host this version on the official Google Play Store. So I would suggest you download the APK from the link below and install it on your smartphone. After that, check whether the app records both ends of your call. We tested this app on a Mi A1 running Android Pie and on Samsung devices, and it worked fine.

Download: ACR 32.9-unChained APK from APKMirror (Free)

3. Call Recorder by Boldbeast

Download: Call Recorder by Boldbeast (Free, Offers in-app purchases)

Last Resort: Don’t Update to Android Pie or Later

If you're someone who has to record calls for legal or record-keeping purposes and doesn't want to take calls on speaker, the easiest alternative is to stick with a smartphone that runs an older flavor of Android. That's actually quite easy in the Android ecosystem, which is highly fragmented; only a select few devices will receive the Android Pie or Q update.

Add Explicit And Clean Tags To Songs In Itunes

If you take pride in how neatly categorised and ordered your iTunes library is, having collated it from physical media, the iTunes Store and other retailers, you may have found inconsistencies among them.

Credits to artists featured on tracks may be displayed differently, and there's a long-running argument about whether "ft." or "feat." is more suitable, but there's one bugbear exclusive to iTunes: the "Explicit" and "Clean" tags. These tiny icons appear beside some song titles in your library and on any iOS device you own, but not beside the songs you have carefully curated yourself.

Luckily, a solution is at hand, and it’s far from difficult to replicate.


1. Begin by downloading MP3Tag, either the installer or the portable version.

7. Call the tag “ITUNESADVISORY” (without the quote marks), and choose the correct value. Entering “0” will mean that no icon appears, just as before. Entering “1” will give the song the “Explicit” tag, and entering “2” will give it the “Clean” tag. Leaving the value blank will give no tag.

7. After modifying the details for the song, press “Ctrl + S” in MP3Tag to get a window informing you that the changes have been saved.

8. Remove the songs you’re changing from iTunes. As iTunes cannot register changes to the song, you’ll have to remove and re-add them. Make sure you choose to “Keep Files.”

9. Drag the modified tracks back into iTunes, and you’ll see the corresponding tag. Sync them with an iOS device and the same remains true.


Although this solution may seem complex, in practice it is far from difficult. Not everyone may feel it benefits their library, but being able to mark specific versions of songs could be convenient depending on social situations.

Paul Ferson

Paul is a Northern Irish tech enthusiast who can normally be found tinkering with Windows software or playing games.


10 Cool Android Gestures You Should Be Using

Everyone loves a cool gesture-based UI. Remember Nokia's MeeGo or Palm's webOS? They were gesture-based operating systems that never really took off, due to poor timing or just poor hardware, but they are still remembered for the intuitiveness they brought to the table. And while they are gone, gestures are making a big comeback as we move toward a bezel-less future. Both iOS and Android now support navigation gestures, and apart from letting you navigate the UI, gestures can also help you get work done in a fast and efficient manner. In this article, we list 10 cool Android gestures you should be using in 2023.

Cool Android Gestures You Should Use in 2023

1. Quickly Select 3-Dot Button Menu Without Tapping

Most Android apps, including Chrome, Google Drive, Gmail, and more, offer a 3-dot menu at the top right that hides extra actions and additional options. One way to access those options is to tap the 3-dot menu and then tap the action you want. However, did you know you can simply press on the button and slide your finger to the option you want to select? It might not seem like a big deal, but this quick gesture can save you precious seconds every time you interact with the menu.

2. Switch Tabs in Chrome

Chances are, you use Chrome a lot on your Android smartphone, and if you do, you may have had trouble switching between tabs, especially on a large smartphone (aka a phablet). Well, Chrome for Android packs a couple of cool gestures to help you switch tabs with ease:

Swipe left or right on the address bar to switch between open tabs in Chrome.

Swipe down from the address bar to open the tab switcher in Chrome. Here, you can move to a different tab, add a new tab or close tabs.

3. Switch Account

With the Android 10 update, Google has brought gestures not only to the system UI but also to many of its apps. For example, the account switcher in many Google apps no longer sits under the hamburger menu; instead, you will find it next to the search bar in the top-right corner. You could tap it and manually choose another Google account, but that would be living in the past. Instead, use this cool new gesture: just swipe up or down on your profile icon and you will seamlessly move to the next account. The animation is smooth and satisfying, so from now on, keep swiping instead of tapping to switch accounts.

4. Trigger Google Assistant

Since buttons are no longer the default navigation system on Android, it has become harder to access Google Assistant. Earlier, you could just tap and hold the home button and the always-obliging Assistant would pop up. But with the gesture navigation system in place, what shortcut do you have? Well, after the Android 10 update, you can swipe diagonally from the bottom corners to trigger Google Assistant. I know it’s not super convenient, but it works and I have used it quite a few times without fail. So next time, when you want to summon Google Assistant, you know what to do.

5. One Finger Zoom

Yes, iPhones have had one-finger zoom for quite some time, but now Android does too. On a webpage and want to zoom in? No need to strain two fingers to pinch: just double-tap and, keeping your finger down on the second tap, slide down to zoom in or slide up to zoom out. How awesome is that? And the best part is that it's not limited to webpages; the same gesture works on photos and in Google Maps. Basically, you can use this Android gesture on anything that supports zooming, so go ahead and zoom in and out while using your device one-handed.
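For the developer-minded, the mapping behind this gesture can be modeled as simple math: each pixel of vertical drag after the double-tap scales the zoom factor a little, clamped to a sensible range. The sketch below is purely illustrative; the class name, method name, and constants are our assumptions, not Android's actual code.

```java
public class OneFingerZoom {
    // Hypothetical mapping: each pixel dragged down multiplies the zoom
    // factor slightly (slide down = zoom in, slide up = zoom out).
    // The result is clamped to an illustrative 0.5x-10x range.
    public static double zoomScale(double startScale, double dragPixels) {
        double scale = startScale * Math.pow(1.005, dragPixels);
        return Math.max(0.5, Math.min(10.0, scale));
    }

    public static void main(String[] args) {
        System.out.println(zoomScale(1.0, 200));  // dragged down: zoomed in
        System.out.println(zoomScale(1.0, -200)); // dragged up: zoomed out
    }
}
```

The exponential form makes repeated small drags compose smoothly, which is why zoom gestures usually feel proportional rather than linear.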

6. Remove Icons Quickly

Do you install tons of apps only for your home screen to get cluttered? Well, Android has a super useful gesture for removing icons from the home screen quickly, and in a fun way: just press and hold an icon and toss it upward, as if you were throwing something away. The icon is instantly removed from the home screen. How cool is that? The gesture is part of the stock Android launcher, but many other launchers, including the OnePlus Launcher, have implemented it too. So go ahead and try out this new Android gesture.

Google Keyboard Gestures

7. Move the Cursor Easily

Moving the cursor to edit text can be really annoying, and most of the time we miss the exact spot. Well, if you are using the Google Keyboard, you can just swipe left and right on the Space bar to move the cursor around.
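Under the hood, this kind of gesture boils down to converting horizontal swipe distance into a character offset and clamping the cursor to the text bounds. Here is a toy model of that step; all names and numbers are illustrative assumptions, not Gboard's actual code.

```java
public class SpaceBarCursor {
    // Hypothetical model: swipe distance (in pixels) divided by an
    // assumed per-character width gives the offset; the new cursor
    // position is clamped to [0, textLength].
    public static int moveCursor(int cursor, double swipePixels,
                                 double pixelsPerChar, int textLength) {
        int offset = (int) Math.round(swipePixels / pixelsPerChar);
        return Math.max(0, Math.min(textLength, cursor + offset));
    }

    public static void main(String[] args) {
        // Swiping right 60px at 20px per character moves the cursor 3 ahead.
        System.out.println(moveCursor(5, 60, 20, 11));   // 8
        // A long left swipe simply stops at the start of the text.
        System.out.println(moveCursor(5, -200, 20, 11)); // 0
    }
}
```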

8. Delete Complete Words

Press-holding the delete key is the usual way to delete text, but it's not the most accurate or streamlined solution when you only want to delete a few words. Well, Google Keyboard lets you drag the delete key to the left to delete complete words. For instance, dragging the delete key to “M” deletes one word, dragging it to “N” deletes two words, and so on. As you drag the key, the words that are about to be deleted are highlighted, to give you a better idea.
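Conceptually, each key the drag crosses maps to one more whole word removed from the end of the text. The snippet below sketches just that word-chopping step; it is an illustrative model (our own class and method names), not Gboard's implementation.

```java
import java.util.Arrays;

public class WordDelete {
    // Remove the last `words` whole words from `text`, the way the
    // delete-key drag removes one extra word per key crossed.
    public static String deleteLastWords(String text, int words) {
        String[] parts = text.trim().split("\\s+");
        int keep = Math.max(0, parts.length - words);
        return String.join(" ", Arrays.copyOfRange(parts, 0, keep));
    }

    public static void main(String[] args) {
        // Dragging one key back removes one word, two keys removes two.
        System.out.println(deleteLastWords("the quick brown fox", 1)); // the quick brown
        System.out.println(deleteLastWords("the quick brown fox", 2)); // the quick
    }
}
```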

9. Capitalize Individual Letters

Need a capital letter in the middle of your text? Instead of tapping Shift, typing the letter, and tapping Shift again, just press the Shift key and, without lifting your finger, slide to the letter you want capitalized. Release, and that single letter is typed in uppercase while the rest of your typing stays lowercase.

10. Type numbers or symbols quickly

If you are typing text that features a lot of numbers and symbols, you will have a tough time, since you have to press the symbols button over and over to add them. Not the best user experience, right? Well, Google Keyboard has a solution for this as well: press and hold the symbols key and, without lifting your finger, drag to the letter that corresponds to the number or symbol you want on the numbers & symbols layout.

Bonus: Third-Party Gesture Apps

Fluid Navigation Gestures

Install: Free with in-app purchases

Ready to use these intuitive Android Gestures?
