

Can the tablet save the publishing industry?

With all the buzz around the iPad and the slew of tablet designs we expect to see at CeBIT this year, it is clear that there is a looming battle, if not an all-out war, brewing for this new category of devices. The question, however, is where tablets fit into the overall consumer market. I believe that there certainly is a market and a fit for these devices in consumers’ lives. I also believe that there are particular elements of computing that will be better on a tablet form factor than on a mobile device or a PC. For example, watching movies on a device more portable and with better battery life than a notebook certainly has value. The web in portrait mode definitely makes many websites feel more consumable, particularly ones that require a lot of scrolling, like news sites. But one of the primary opportunities for the tablet, I think, lies with the publishing industry.

Can the Tablet Re-Invigorate the Print Publishing Industry?

It would be hard to argue that the publishing/print industries are in a period of industry-wide transition. This transition at its most fundamental level is one that has been catching industries off guard for several decades: it is the transition from analog to digital.

In many of our presentations to clients in the tech industry, we point out that this transition from analog to digital is a good thing, and if harnessed correctly can mean substantial opportunities. This we believe is the case with publishing.

The Internet has been called the demise of many things and many claim it is the cause for the decline in print media consumption. There is some truth to this, however, I would argue that a more fundamental barrier has kept the publishing industry from breaking new ground. That barrier is the book (as we know it today) itself. Let me share some statistics from Parapub, a publishing industry research organization:

1. Generally 80% of US households did not buy or read a book last year.

2. 70% of US adults have not been in a bookstore in the past 5 years

3. 42% of college graduates never read another book after college

4. 1/3 of high school graduates never read a book for the rest of their lives

5. 57% of new books are not read to completion

Since we can generally agree that the vast majority of US adults can read (at what grade level is debatable) then the question becomes why aren’t more Americans hooked on the completely immersive experience of reading a good book?

Some answers could be that people do not have enough time to sit down and read, or that TV and Internet media are more interesting, entertaining, and captivating than reading a book. These are all reasonable answers to the intriguing question of why so few American adults take the time to sit down and get engrossed in a book. However, I think the issue is deeper.

So what is a book? This, I think, is an excellent question. If you look at any dictionary’s definition of a book, it almost always includes the word “printed,” and I believe this is flawed. A better definition, in my opinion, is the creative use of words to captivate and/or stimulate the reader. Perhaps this definition doesn’t work for textbooks or reference books, but I feel there is potential for innovation there as well.

The opportunity staring the publishing industry in the face is the opportunity to re-invent the way it creates and distributes content. We have seen a quick glimpse of this with the Apple iPad and the NY Times application, which, according to them, is “the best of the Times and the best of the web” in the same experience. We need more publishers thinking like this.

So how do tablets fit in?

The tablet form factor is the most logical platform for the best experience with this new class of content, and it can become the platform of choice for many mobile computing tasks. The PC is great for a lot of things, and so is the mobile phone. Similarly, the tablet will be better at some core computing tasks than both of those platforms, and it adds an element of mobility and form that the notebook cannot. It may very well become the best mobile platform for watching movies and video, surfing the web, viewing and sharing photos, reading and learning, gaming, and anything else that can leverage the kind of hardware we will see flood the market. For many it may become their preferred mobile platform, and for some it may even replace their notebook.

One other implication of the tablet concerns Netbooks. I have been skeptical of Netbooks for some time despite their growth, which I feel was short-term and is now showing evidence of slowing down. Tablets, and in this case the iPad specifically, deliver a superior experience in every area where the Netbook was considered valuable. The tablet is a better web browsing device, because a Netbook’s screen is too small. The difference maker for me in this regard is the web in portrait mode vs. landscape mode. The tablet really shines here because the web in portrait mode is incredibly compelling.

Think about this: on my 15-inch MacBook Pro, the NY Times website is about four full-page scrolls from top to bottom. This is even worse on a Netbook, yet on the iPad I saw nearly the entire NY Times page in portrait mode with almost no scrolling, with the added benefit that using your finger to scroll is much more pleasant and natural than a mouse or trackpad. Tablets are also better for watching video, because Netbooks are underpowered and often require media co-processors to do so. Tablets also provide better battery life and a more mobile form factor than both Netbooks and notebooks when it comes to video. Netbooks, I believe, will evolve into a new notebook form factor, and tablets will step in and take their place in the market.

So what about the Kindle, Nook, and Sony Reader? Although there may be a place for stand-alone e-readers as well as other stand-alone devices, I feel that the market at large will evolve to include more of these rich media experiences. The kind of experiences necessary to grow the tablet category will require a processor capable of graphics and rich media. Nvidia’s Tegra chip has been gaining a great deal of momentum in this category due to its extremely capable video, graphics, and mobile web experience. Many companies in the ARM community are focusing on this market exclusively, and Intel intends to focus some of its Atom efforts on tablets as well. Given this momentum, I can assure you we are going to see a great deal of innovation and experimentation around this form factor.

The most important point, however, may not be that the tablet as a platform represents the next-generation mobile reading, video watching, internet browsing, gaming, picture viewing, and audio playing device. The most important point may be that with the tablet, each of those superior mobile experiences exists on the same platform.


Human-In-The-Loop Machine Learning Can Save You From The Data Trap

AI models are trained on reams of data collected over the years. But an AI model’s role is not to solve a general problem but a specific one, and the odds of not finding the required data sets for your specific problem are very high. There is every chance that a team can go on a data-gathering marathon only to end up at a dead end called the data trap. ML models base their presumably accurate predictions on sheer numbers and cold calculations, but in reality they lack the certainty in understanding context that humans exhibit. To make up for this gap, human involvement is considered an unavoidable element in executing an ML cycle. This is where HITL, the human-in-the-loop mechanism, comes in. A human-in-the-loop model allows humans to validate a machine learning model’s outputs as right or wrong at training time.

A machine learning project typically begins with data preparation, and unfortunately, it is the task that eats up most of a project’s valuable time. Data preparation is absolutely necessary because not spending enough time understanding and labeling the data is a sure formula for a project’s failure. In the HITL model, the labeling task is assigned to a human who can differentiate and categorize, making it easier for a machine learning algorithm to pick the right set of data. The question of how much a human should be involved comes down to the Pareto-style split that ML developers often adopt: 80% computer-driven AI, 19% human input, and 1% randomness. Google Health’s medical AI system, developed with DeepMind, evaluated more than 2,600 breast cancer scans and detected cases that a radiologist could have missed. In medical cases there is every possibility of exceptions; the argument here is that employing the HITL model brings more accuracy to diagnostic tests, where some flagged cases might turn out to be non-cancerous cysts. We would certainly prefer 99% accuracy to 80%.

Why is HITL so crucial for ML model development?

To answer this question, we need to understand what happens throughout the cycle. First, humans label the data as part of data preparation, so that the models are fed only high-quality data. Given the diversity and complexity of practical situations, an ML model should be tuned to all probable situations, which includes handling overfitting, teaching classifiers about edge cases, and covering new categories of data in the model’s purview. In many cases, despite all the training and tuning, the model becomes unconfident about a judgment or overly confident about an incorrect decision. In the HITL model, a human can simply swoop in with feedback. Thus, HITL achieves what a human or a machine couldn’t achieve alone, and with continuous feedback the machine learns to perform better. HITL also provides a larger playground for testing ML models, which is one of the most important MLOps practices.
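The feedback step described above can be sketched as a simple confidence-routing rule: predictions the model is unsure about are escalated to a human reviewer, while the rest are accepted automatically. This is a minimal illustrative sketch only; the threshold value, the function name, and the sample data are assumptions, not part of any particular HITL framework.

```python
# Minimal human-in-the-loop routing sketch: low-confidence
# predictions are escalated to a human reviewer, and the
# corrected labels can later be fed back into training.

def route_predictions(predictions, threshold=0.8):
    """Split (item, label, confidence) triples into auto-accepted
    results and items queued for human review."""
    auto_accepted, review_queue = [], []
    for item, label, confidence in predictions:
        if confidence >= threshold:
            auto_accepted.append((item, label))
        else:
            review_queue.append((item, label, confidence))
    return auto_accepted, review_queue

preds = [
    ("img_001", "cat", 0.97),
    ("img_002", "dog", 0.55),  # uncertain -> human review
    ("img_003", "cat", 0.91),
]
accepted, queue = route_predictions(preds)
```

In a real pipeline, the review queue would feed an annotation tool, and the human-corrected labels would be appended to the training set for the next cycle.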

HITL has your back when Big Data gives in

When the data set is too small, the probability of overfitting is high. The model makes generalizations over a small set of data, and when presented with rare values, the conclusions drawn are the direct result of the pattern it learned from not-so-relatable data. This problem can be addressed by adding more data, increasing the data set size through data transformation techniques, regularizing the data, removing features, or increasing model complexity. Similar techniques work in the case of underfitting, when the model fails to recognize the underlying pattern because outliers distort the picture. All these techniques come with drawbacks, however, and can result in suboptimal predictions. HITL can help in two ways: the ML engineer can pause the model and readjust it to restart with an enhanced architecture, or attempt on-the-fly label correction to mitigate classification errors. ML models are destined to drift as the underlying data changes, and past performance can never guarantee future results, hence the need for adjustment. In all such cases, HITL is the rudder.


The Billion Dollar Computer Malware Industry

As scores of people and corporations bring their devices onto networks, the volume of personal and confidential information shared online has grown to an all-time high. In pursuit of this information, attackers have taken a renewed interest in exfiltration from both individual and corporate environments. This, in turn, has widened the scope of organizations vulnerable to malware-driven cybercrime. In addition to banks and credit unions that are subject to online banking fraud, other organizations susceptible to financial fraud include:

Insurance companies

Payment services

Large e-commerce companies


Moreover, the anatomy of attack has grown more cohesive, and its distribution has become even more organized. Developers of crimeware profit from the sale or lease of crimeware to third parties, who use it to perpetrate identity theft and account fraud. Today the malware industry supplies all the components cybercriminals require to perpetrate malware-driven crime such as data theft, financial fraud, etc.

[Image: Computer Malware Industry (courtesy IBM Software ebook)]

There are multiple variants of malware discovered daily in the wild, capable of exploiting zero-day vulnerabilities. Some are designed with polymorphic capabilities: this technique circumvents signature-based detection and changes the file name on each subsequent infection to escape detection. This post looks at two recent forms: ransomware and cryptojacking.


In simplest terms, ransomware is a type of malware that prevents or limits users from accessing their system, either by locking the system’s screen or by locking the users’ files, unless a ransom is paid. It has been around for several years but has become far more prevalent in the past couple of years.

One factor in the rise of this genus of malware is the expansion of cryptocurrencies like Bitcoin. The modus operandi involves first gaining access to a user’s device and encrypting important documents/files with a key known only to the attacker, then demanding a transfer of funds through a currency such as Bitcoin or MoneyPak in exchange for decryption of the files. Throughout, the attacker places a time limit on the user to comply with the demands, after which all files are permanently deleted and become unrecoverable. Unfortunately, the most effective defense against ransomware attacks, as with purely destructive malware, is regular, frequent backups of systems. Without a backup of a compromised system, the asset owner is at the mercy of the attacker.


What makes the malware industry a billion-dollar industry?

Malware is widely available for purchase, therefore providing a profitable way for criminals to commit cybercrime.

Many individuals, particularly youngsters, get lured into this dirty business owing to the rich rewards for stealing various types of information. A few example black-market prices are listed below.


Full identity information: $6

Rich bank account credentials: $750

US passport information: $800

US Social Security number: $45

These prices may fluctuate in the marketplace depending on the supply-demand criteria.

It’s often observed that most attacks don’t target the organization’s systems, but rather the customer and employee endpoints. Why so? Organizations invest substantially in multiple layers of perimeter security in an attempt to filter out cybercriminals at the edge of the network. For endpoint security, on the other hand, organizations rely on anti-virus software, which often detects less than 40 percent of financial malware. As such, cybercriminals conduct malware-driven cybercrime by utilizing malware on user endpoints to commit financial fraud and steal sensitive data.

Also, the malware industry largely runs on spam and phishing malware written by paid professional programmers. At times, spam vendors even employ professional linguists to bypass filters and psychology graduates to manipulate victims. There’s no dearth of money: talented employees can earn upwards of $200,000 per year, and it gets even more rewarding for remote root zero-days, which can fetch $50,000 to $100,000.

Even the workload is smartly distributed. For instance, outsourcing the anti-detection code allows malware authors to concentrate on the payload.

Cyber-banditry is on the rise and will reach gigantic proportions as time passes!

The Benefits Of Project Management In The Automotive Industry

By utilizing project management practices, companies are able to maximize their resources and reduce costs while ensuring customer satisfaction.

Additionally, effective project management can help to increase safety standards in the workplace by setting clear goals for employees which need to be met.

Furthermore, through careful planning and implementation of projects within the automotive industry, organizations can stay ahead of trends in order to remain competitive in today’s ever-changing global market.

Benefits of Project Management in the Automotive Industry

Improved efficiency and effectiveness in project execution

Automotive project managers are able to better plan and track expenses, as well as anticipate potential issues that could arise during projects. This helps reduce potential cost overruns, which can significantly affect a company’s bottom line.

Furthermore, by tracking progress and activities on a regular basis throughout each project, automotive project managers have been able to identify areas where time or resources may be wasted so they can proactively adjust before any delays occur.

Finally, the use of collaborative software in the industry allows multiple stakeholders to collaborate remotely while still being held accountable for their respective tasks, ensuring nothing slips through the cracks during production process timescales.

Effective resource allocation and cost management

Project management in the automotive industry can help ensure a smooth and efficient production process by providing resources, such as labor, materials, and capital. With effective resource allocation and cost management processes established, organizations can reduce their expenses while also increasing their efficiency.

Moreover, project management helps coordinate tasks across different departments in an organization. This ensures that all relevant parties are aware of what needs to be done at any given time and enables managers to keep track of progress throughout the entire value chain.

Additionally, project management offers visibility into potential risks associated with certain projects or activities which allows for timely action should any issues arise along the way. Ultimately, utilizing project management within the automotive industry helps create more efficient operations resulting in higher customer satisfaction levels as well as increased profitability for businesses within this sector.

Enhanced collaboration and communication among team members

By creating a shared database of project information and tasks, teams are able to quickly access the necessary data in order to effectively plan, coordinate and execute their tasks. This increases the overall efficiency of operations within automotive companies, allowing for faster production cycles and better customer service.

Furthermore, by introducing project management tools into the workplace, team members can communicate more easily with each other as well as with customers or suppliers.

Projects that were previously difficult to manage due to a lack of communication among departments or between different locations can now be managed efficiently thanks to these modern tools.

Reduction of risks and uncertainties

Project management allows for the identification and assessment of risks that could have an impact on project goals as well as allowing for problem-solving techniques to be used in order to mitigate those risks. Automotive industry professionals can use project management principles to prepare plans, anticipate potential difficulties, create solutions and monitor progress throughout all stages of the project lifecycle.

Additionally, teams are able to work together in a collaborative environment with clear objectives through common communication tools provided by project management software applications.

By using these systems, automotive companies can ensure their projects will be completed efficiently and effectively while reducing risk factors associated with them at every step along the way.

Adherence to project timelines and deadlines

The use of project management ensures that tasks are being completed on time and allows for predictive analytics to be used in order to plan ahead and anticipate any potential problems or delays in production.

Additionally, project management provides an organized framework that ensures that resources are allocated properly and deadlines can be met without sacrificing the integrity of the product itself.

Improved quality control and product development

Companies in the automotive industry have long been using project management techniques to help manage the production process and ensure that quality standards are met. From ensuring on-time delivery of parts, managing budgets, and monitoring resources, to risk assessment, project management is a key component of any successful business within this sector.

The use of sophisticated tools such as Enterprise Resource Planning (ERP) systems allows companies to streamline their production processes while keeping track of every step along the way.

Better customer satisfaction and stakeholder management

As project management provides a clear framework for the entire process, it ensures that customers and stakeholders are better informed about the progress of their projects. It allows them to have more control over the project, as well as to provide feedback throughout the development cycle.

This helps organizations to ensure they are meeting customer expectations in an efficient manner, while at the same time ensuring stakeholder satisfaction with minimal disruption or delay.

Challenges and Solutions in Project Management in the Automotive Industry

Common challenges faced in managing projects in the automotive sector

To effectively manage projects in this sector requires strong leadership and management skills. A clear project plan should be established which includes objectives and deadlines to ensure that all tasks remain on track.

Risk identification and management are also key for successful project completion as there are numerous potential risks associated with the automotive industry such as supplier failure or product recalls.

Effective communication between stakeholders is essential throughout the life cycle of a project to ensure that everyone understands their roles and responsibilities. Finally, having effective monitoring processes in place will enable quick responses when any issues arise during the development or delivery phases of a project.

Solutions and strategies to overcome these challenges

One approach is a focus on change management, which is essential for implementing new technologies and processes. Change management can be used to identify potential risks, develop realistic expectations and timelines, design appropriate training programs, and manage stakeholders’ expectations.

Additionally, it is important to have an agile project management process that allows for frequent feedback so that any changes or issues can be addressed quickly before they become larger problems.


Project management is considered to be an essential tool in the automotive industry, as it helps companies stay organized and promote efficiency. By using project management processes such as Agile or Waterfall, teams can work together to streamline their operations. This allows for faster product development cycles and better quality control, resulting in improved customer satisfaction.

Additionally, project managers help keep stakeholders informed of progress made on projects in a timely manner. With these benefits in mind, it is clear that project management plays an important role in the automotive industry and should continue to be used for many years to come.

The ROI Of Ranking In Google Search: How Organic Search Can Save You Thousands

SEO and content work, at their best, provide a provably positive return on investment (ROI).

Predicting the ROI, however, can be difficult.

Some sites will try to project potential ROI for ranking in Google Search by looking at what ranking improvements mean in dollars and cents.

Looking for a method of determining the ROI of your content and SEO initiatives? Read on to learn more.

Two Assumptions First

One way to estimate the potential value of a ranking improvement in real dollars is to project how much it might cost to acquire the same traffic with paid search.

Using this type of metric brings a few assumptions into the picture when making predictions.

Assumption 1: Paid Search Provides a Neutral or Positive ROI

If you help a client get 500 new users per month for a particular keyword, and that keyword has a CPC of $3, you could estimate these ranking changes create $1,500 per month in value.

Over the year, $18,000 of value would be created by this ranking change.

Here, you’d be using the CPC for the keyword as a proxy for its value to your client.

In truth, the client:

Might never see a positive ROI bidding such a rate for that keyword.

Or might gain a positive ROI bidding on that keyword with an even higher value.

This uncertainty limits how accurately you can predict the value of influencing rankings for a particular keyword, but it provides a place to start, with estimations that can be helpful.
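Worked out in a quick sketch, the arithmetic behind that CPC proxy looks like this; the 500 users and $3 CPC are the illustrative figures above, not real campaign data:

```python
# Value a ranking improvement using CPC as a proxy:
# what the same traffic volume would cost via paid search.

def traffic_value(new_users_per_month, cpc, months=12):
    """Return (monthly value, annualized value) of organic traffic,
    priced as if it had been bought at the keyword's CPC."""
    monthly = new_users_per_month * cpc
    return monthly, monthly * months

monthly_value, annual_value = traffic_value(500, 3.0)
print(monthly_value)  # 1500.0
print(annual_value)   # 18000.0
```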

In order to predict how ranking increases might impact search traffic, there are a few approaches you can take.

There’s even data for CTRs across different industries.

If you want to know what the traffic increase might be if you move a client from Position 8 to Position 3 on Google Search, you can calculate it as a function of the estimated CTR at each position and the keyword’s search volume.

This calculation also assumes that search volume estimations are accurate, as reported by tools. If the client has reliable and detailed analytics, it may be possible to make much more informed calculations here.
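As a rough sketch of that function, you can pair an assumed CTR-by-position curve with a keyword’s search volume. The CTR percentages below are placeholders for illustration and should be replaced with real industry CTR data:

```python
# Project the traffic change from moving a keyword between
# positions, using an assumed organic CTR curve by position.
ASSUMED_CTR = {1: 0.28, 2: 0.15, 3: 0.11, 4: 0.08,
               5: 0.07, 6: 0.05, 7: 0.04, 8: 0.03,
               9: 0.025, 10: 0.02}

def projected_traffic_gain(search_volume, current_pos, target_pos):
    """Estimated extra monthly visits from a ranking move."""
    delta_ctr = ASSUMED_CTR[target_pos] - ASSUMED_CTR[current_pos]
    return search_volume * delta_ctr

# Moving from Position 8 to Position 3 on a 10,000/month query:
gain = projected_traffic_gain(10_000, current_pos=8, target_pos=3)
print(round(gain))  # 800
```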

Search results are increasingly complicated, which presents a range of problems. Result pages are no longer a list of 10 links.

Fortunately, you can isolate data to find the percentage of search results for any given domain that has results with alternative SERPs – which might include featured snippets, knowledge panels, video, local results, etc.

This data can help you more accurately project final growth possibilities, even if it does mean leaving out some keywords or manually reviewing certain SERPs to adjust best-guess CTRs if ranking improvements are achieved.

Despite these potential inaccuracies, collecting and analyzing this data allows you to move forward with rough calculations.

If a client can provide data about revenue and traffic volume via analytics – even at a high level – it’s possible to derive ratios that help you predict by what margin your calculations might be off.

Calculating the Current Value of Traffic

Another way to gather similar data and develop calculations is to use a tool like SEMrush or Ahrefs.

This software provides an aggregate estimation of the value of search traffic for any domain, as well as per-ranking calculations when query results are exported.

This calculation is likely derived as described in the above section:

Estimating CPC multiplied by the overall search volume.

Then making estimations about a site’s predicted total organic traffic and summing the values for each ranking keyword.

Let’s look at Bankrate.com as an example. (I am not affiliated with Bankrate in any way.)

If you estimate the CPC value for each keyword Bankrate.com ranks for, then multiply that by the estimated volume of organic traffic it receives for each keyword, the total value comes to around $35 million per month.

In other words, if Bankrate.com were to pay for all its organic traffic (the same volume) in paid search, it would need to spend $35 million per month!
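That aggregate figure is just the per-keyword product of estimated CPC and estimated organic visits, summed across the ranking profile. A toy sketch with invented keyword data:

```python
# Estimate what a site's organic traffic would cost in paid
# search: sum of (estimated CPC x estimated organic visits).
keywords = [
    {"kw": "mortgage rates",  "cpc": 12.50, "organic_visits": 40_000},
    {"kw": "savings account", "cpc":  8.75, "organic_visits": 25_000},
    {"kw": "cd rates",        "cpc":  6.00, "organic_visits": 15_000},
]

monthly_traffic_value = sum(k["cpc"] * k["organic_visits"] for k in keywords)
print(monthly_traffic_value)  # 808750.0
```

Real exports from rank-tracking tools run to thousands of rows, but the summation is the same.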

Setting Appropriate Goals

Look at Recent Ranking Losses

When engaging with a new client, your goal is to provide as much immediate value as possible. This often means looking for “low hanging fruit” or opportunities to make provably positive ROI quickly.

One way to identify low hanging opportunities is to identify those keywords where ranking losses have occurred recently and to find ways to restore or break beyond previously held higher ranks.

Data from SEMrush or Ahrefs can be segmented to show keywords that have lost rankings over the past month.

You can extrapolate the value of these traffic losses in totality, grouped by URL, and also by individual keyword.
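One way to sketch that grouping is to total the estimated monthly value lost per URL. The keywords, CTR values, and CPCs below are invented for illustration:

```python
# Group recent ranking losses by URL and total the estimated
# monthly value lost (CTR figures are assumed placeholders).
from collections import defaultdict

losses = [
    # (url, keyword, volume, cpc, old_ctr, new_ctr)
    ("/mortgage", "mortgage rates",  40_000, 12.50, 0.28, 0.15),
    ("/mortgage", "refinance",       20_000,  9.00, 0.15, 0.11),
    ("/savings",  "savings account", 25_000,  8.75, 0.28, 0.15),
]

loss_by_url = defaultdict(float)
for url, kw, volume, cpc, old_ctr, new_ctr in losses:
    # lost visits = volume * CTR drop; valued at the keyword's CPC
    loss_by_url[url] += volume * (old_ctr - new_ctr) * cpc

# Sort URLs by biggest estimated monthly loss first.
ranked = sorted(loss_by_url.items(), key=lambda kv: -kv[1])
```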

Look for Keywords and Pages with the Largest Recent Drops

If you look at the keyword rankings Bankrate.com lost in the past month, organized by those with the largest potential revenue hit, the impact is immediately visible.

Looking at this data, you see that a ranking loss from Position 1 to 2 for these keywords resulted in over $100,000 per month in losses.

This could mean a million or more dollars in revenue over the course of a year.

One thing to consider in the above scenario is that you’re looking quickly at rankings for individual keywords attached to individual URLs.

You can take this a step further by analyzing what the net ranking losses are for pages across the domain.

This can allow you to find those URLs whose ranking dips brought the largest overall potential revenue loss. It also gives you a better picture of:

What those losses (and gains) look like across all the keywords each individual page ranks for.

How those fluctuations impact the business in aggregate.

It’s possible that some of the largest keyword ranking losses are also happening on pages that had the highest-ranking gains (for other keywords).

Your task is to find pages that used to do well across a keyword set, helping you narrow down pages/topics/content that can be made more robust to improve rankings.

By then looking at what these losses mean in real dollars, you can set goals that you tie back to expected ROI.

When communicating with a client, you can frame recommended tactics around these pages as work to stem losses that might continue without intervention.

Especially in highly competitive verticals, there can be an arms race for the top-ranking spots, which necessitates constant investment on key pages and topical sectors to maintain rank.

Go After Keywords Within Striking Distance

In setting goals with ROI in mind, one tactic can be to examine those keywords and their associated pages that are within striking distance to the top ranking – pages that rank lower on the first page and are capable of reaching Position 1.

You can segment your data to show you what the potential revenue gains might be if these select keywords were to gain top rankings.
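A sketch of that segmentation might filter for keywords in positions 2 through 10 and estimate the upside of reaching Position 1. Again, the CTR curve and keyword data are assumptions for illustration only:

```python
# Find "striking distance" keywords (positions 2-10) and
# estimate the monthly value of reaching Position 1.
CTR = {1: 0.28, 2: 0.15, 3: 0.11, 4: 0.08, 5: 0.07,
       6: 0.05, 7: 0.04, 8: 0.03, 9: 0.025, 10: 0.02}

keywords = [
    {"kw": "cd rates", "pos": 4,  "volume": 15_000, "cpc":  6.00},
    {"kw": "heloc",    "pos": 2,  "volume": 12_000, "cpc": 10.00},
    {"kw": "budget",   "pos": 14, "volume": 30_000, "cpc":  1.50},  # out of range
]

opportunities = [
    # potential gain = extra visits at Position 1, valued at CPC
    (k["kw"], k["volume"] * (CTR[1] - CTR[k["pos"]]) * k["cpc"])
    for k in keywords
    if 2 <= k["pos"] <= 10
]
```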

This investment will typically be a function of content improvements, UI/UX improvements, and link building. Budget can be set by understanding what your return will be if you achieve your goal.

You can also do this same calculation but grouped by page, which will allow you to have a rough estimation of which pages might benefit most from continued improvement and authority building.

The end goal of this type of top-level, page-specific calculation is to identify pages whose upward mobility can have the greatest impact value-wise. It can also hint at where improving authority and earning links could have the biggest impact.

Investigating Links

Links are highly important to establishing or maintaining the authority of a page. This is especially true for competitive topics and keyword sets.

Knowing that part of your prescription to stimulate growth will include earning links and building authority, it can be very helpful to know which pages are experiencing a loss of linking root domains when setting goals with ROI in mind.

There is a direct correlation between the loss of unique linking domains (ULDs) and the loss of traffic or hard-earned rankings. This is especially true when the lost ULDs have high authority.

When you find the pages that have seen a loss of links and are also important to a client’s site monetarily, you can set goals for improving that page.

Developing more robust content that attracts new links to these pages is a straightforward way to improve their rank.

You can also then make estimations about the value that will be created as a result of ranking improvements. This is helpful in setting budgets for link acquisition and content development targeted at specific pages or topical categories.

There are two ways you can go about finding pages that are having issues with link attrition.

Method 1: Referring Domain Totals

Tools like Ahrefs can provide you with URL-specific data about referring domain totals.

This data is updated frequently and you are able to collect daily link counts for at least a year.

In most applications, you examine unique linking domain counts for each URL on a site over windows of two, six, and eight months.

By plotting each unique linking domain count over a time period, you can perform a linear regression on the data.

You can then obtain a trend of the data points, which allows you to sort and compare the upward or downward link-count trajectories of each page on a domain.
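The trend calculation described above can be sketched with NumPy's `polyfit`: fit a line to each page's daily ULD counts and sort pages by slope. The counts below are invented; in practice they would come from a tool like Ahrefs:

```python
# Minimal sketch: fit a line to each page's daily unique-linking-domain
# (ULD) counts and sort pages by slope (ULDs gained or lost per day).
# The histories below are made-up illustrations.
import numpy as np

uld_history = {
    "/pricing":  [120, 119, 118, 115, 113, 110, 108],  # declining
    "/blog/faq": [40, 41, 41, 43, 44, 46, 47],          # growing
}

def uld_slope(counts: list[int]) -> float:
    """Slope of a degree-1 (linear) fit: ULDs gained (+) or lost (-) per day."""
    days = np.arange(len(counts))
    slope, _intercept = np.polyfit(days, counts, deg=1)
    return float(slope)

# Most negative trajectories first: these pages are bleeding links.
for page, counts in sorted(uld_history.items(), key=lambda kv: uld_slope(kv[1])):
    print(f"{page}: {uld_slope(counts):+.2f} ULDs/day")
```

Joining the resulting slopes against per-page revenue data is what lets you filter down to pages that are both losing links and financially important.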

Because you also have data about which pages produce the most potential revenue in dollars for a client, you can organize your data to filter the most impactful pages.

This allows you to isolate from your data a select group of pages that have lost significant link volume and whose positioning and ranking are also highly important to the financial productivity of the website.

Here is an example of the plotted unique linking domains (ULD) counts and the corresponding linear regression:

Method 2: High-Value Pages

Another method you can use to identify pages that have had significant attrition of authority is to look at the aggregate value that a link has, or as it’s commonly called, link equity.

In the same way as before, you can look through the data to identify key pages that have high value (bring in top dollars) but that have had (or will have) a likely degradation of their overall domain authority (DA).

You can get this data with tools provided by Ahrefs.

In this instance, instead of looking at the slope of the overall ULD trajectory (positive or negative, and to what degree) as in Method 1, you can sum the DA of new and lost links to a page. This predicts whether a page has been, or soon will be, impacted by large aggregate changes to its perceived authority caused by changes to its link portfolio.

For example, a given high-value page might have 50 ULDs. Perhaps 5% of the links pointing to that page have a DA over 60, with the rest spread fairly evenly across lower DA tiers.

It’s important for you to know if high DA links are gained or lost, as they could have an inordinately strong impact on the overall authority of the page.

Your approach then is to average the DA gained and lost over various time periods. When you see high average losses, you can identify those pages that could benefit from regaining some of the strong links they may have lost.

You can also identify pages that may soon take ranking hits as a result of lost, high-authority links.

Note: Averaging the DA of inbound links is slightly more complicated than a normal averaging scenario because DA is a logarithmic scale. In other words, a DA 90 link is not the same value as nine DA 10 links.

In order to get a rough average of the trajectory of gained or lost DA for a page, you collect the incremental new and lost links, their respective DA, and then find the average of these logarithmic values.

You can do this while consuming API data from Ahrefs, using Python's NumPy library, which makes averaging these log values fairly straightforward.

In the end, you can identify pages that have lost a significant volume of links or have a link attrition trajectory that corresponds to ranking losses.

You can also understand which pages have lost highly-valuable links, or in aggregate terms, have lost more total DA as the result of changes in their link profile.

Surprisingly, some pages show a positive growth trajectory in total unique linking domains, but when you examine the quality of those domains, you find that higher volumes of new links aren’t making up for the loss of higher-value links to the same page.

The ROI of Ranking in Google Search

We’ll never be able to measure ROI of ranking in Google search down to the dollar.

However, using these strategies, you can clearly illustrate the value of prioritizing organic search, especially in a relative way.

Use these methods to justify your spend, adapt your budgets for the future, and highlight your past successes.

Image Credits

All screenshots taken by author, September 2023

The Tao Of Surface: Inside Microsoft’s First Tablet

The tablet segment is an increasingly crowded one, but according to Sinofsky Microsoft’s approach is considerably different from that of its key rivals. The big name in the room – and one liberally cited by both Sinofsky and Panay – is Apple’s iPad, and with its majority share of the tablet market it’s no surprise that Microsoft has been keeping an eye on the iOS pad. Still, Android hasn’t been slow to take on tablets, whether in the flavor Google would prefer or a modified version made to suit OEM ambitions.

The starting points are pretty clear, Sinofsky points out. “Google, starting from either search or from open-source, and building up from a phone. So, they built a great phone and they said ‘oh, we’ve got to do a tablet,’ and we’re all familiar with what it’s like to build the experience after you build the experience,” the president says. “They went through the whole effort to redraft the UI, to turn it into a tablet, when they had started really from a phone. And when you buy into a tablet, you buy into… it’s there for the search ecosystem, the Google software, and it’s all good, but it’s their perspective.”

Surface – Steven Sinofsky + Panos Panay

Amazon perhaps epitomizes the fragmentation story going on within Android today, heavily customizing the OS to tailor it to its own needs. “Amazon did this incredible job on bringing the Kindle Fire to market, and everybody understands what you get when you buy the Fire,” Sinofsky argues, “you buy the device, you buy into the Amazon ecosystem. They look at themselves as a retailer, they look at tablets as a way to buy stuff, whether it’s digital goods or physical goods, and so they want to have a complete experience.”

In contrast, Microsoft comes to tablets – not new, as versions of Windows have supported touchscreen hardware and digital pens since the days of Windows XP Tablet Edition – with a history in more ubiquitous PCs: desktops and notebooks. “And all of those are perfectly rational, good views of why to build hardware and what to do” Sinofsky concedes. “And we of course looked at this challenge, and said, well, we think of PCs as this generic kind of device that can work across a broad range of scenarios, that have a broad range of form-factors, that have extensible platform, that have peripherals and are part of ecosystems.”

Boiling that premise down to a portable device users would keep with them all day, every day, was what led to Surface. “We want to bring all of that goodness to a kind of device that you carry around with you all the time, that has all-day battery life, with its roots in this ecosystem, and its roots in the notion of productivity. And in many ways, that’s where we start with Surface” the Windows president explained. “It’s about really bringing that extra perspective to market – we started with thinking about all of the things that are in those elements, whether it’s things like a USB port, or the design of the case, or the aspect ratio. And all of these things become important decisions in how we build Surface.”
