For episode 176 of The Search Engine Journal Show, I had the opportunity to interview Lily Ray, the SEO Director of Path Interactive.
Ray, a sought-after conference speaker, talks about what to do when you’re looking to recover from a core update or a declining SEO performance.
How common is it for people to really see an opportunity to "recover" these days? You hear Google constantly saying, "We did this update, there's nothing you can do." Do you buy into that?
Lily Ray (LR): Yeah, I don't buy into that, because my team and I here help clients recover. But that being said, it's very, very difficult, so I understand why Google says that.
And another thing they always say is like, “We tell people that there’s nothing you can do because we don’t want webmasters to go out there and just frantically change a bunch of things that maybe weren’t actually problems, think more long term than that.”
So I do think recovery is possible, but I think it requires a really, really heavy investment in resources and in time and a lot of patience as well.
One thing that we see a lot of is Google rolls out these core updates several times per year, but maybe for two or three core updates after you’ve been addressing some of the problems with your website, you might not see any immediate impacts in performance or positive impacts in performance.
You might even see some negative ones, which can be really disheartening. But, over time, if you invest the right time and energy and focus on the right things, you will ultimately see a recovery.
It might not be a full recovery, it might just be partial. But we’ve seen time and time again that it’s possible to recover.
Brent Csutoras (BC): So obviously seeing a decline is concerning, for company owners and businesses themselves but also for the people who are managing those offices or those initiatives… So how do you approach [these changes]? What would you say are the beginning steps to assess why you’re seeing a change?
LR: I think the first thing we like to look at is which algorithm update affected them.
Maybe they'd seen improvements from the last couple of updates, but then with a more recent one they started to see some big negative declines. We like to assess what happened on those dates.
What do we know about what types of industries were affected on those days?
Was it something that the industry thinks is maybe link-related or E-A-T-related?
So we start with that and then we dig into the data and obviously look at what’s really happening with the sites.
Using Google Search Console, for example, you can get a really good glimpse of which particular pages were affected, which keywords were affected.
Google talks a lot about the fact that it really has to do with relevancy.
So it might be that your website’s perfectly fine, but they’ve kind of recalibrated something in terms of what’s relevant for that query and your website or your webpage might not be the relevant thing anymore.
So we gather data about:
What’s actually happening in the landscape now?
What types of websites and pages are currently ranking for the keywords that we used to rank for?
What types of elements do they have on their page that maybe we don’t or what does their link profile look like compared to ours? Or are they a much more trustworthy organization than we are?
So it’s very case by case, but you start with a high-level theory and then you really have to dig into the data to see what’s really happening.
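For readers who want to try the kind of Search Console comparison Ray describes, here is a minimal sketch against the Search Console API. It assumes you already have an OAuth access token with read access and a verified property; the token, property URL, and update dates below are placeholders rather than anything taken from the interview.

```typescript
// Sketch: compare Search Console clicks by page before vs. after a core update date.
// Assumes an OAuth 2.0 access token (webmasters.readonly scope) and a verified property;
// both values below are placeholders.

const ACCESS_TOKEN = "ya29.placeholder";                 // assumed: obtained elsewhere
const SITE = encodeURIComponent("https://example.com/"); // assumed property

type Row = { keys: string[]; clicks: number; impressions: number };

async function queryClicks(startDate: string, endDate: string): Promise<Map<string, number>> {
  const res = await fetch(
    `https://searchconsole.googleapis.com/webmasters/v3/sites/${SITE}/searchAnalytics/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ startDate, endDate, dimensions: ["page"], rowLimit: 5000 }),
    }
  );
  const data = await res.json();
  const rows: Row[] = data.rows ?? [];
  return new Map(rows.map((r) => [r.keys[0], r.clicks] as [string, number]));
}

async function main() {
  // Two equal windows on either side of a hypothetical core update date.
  const before = await queryClicks("2021-05-03", "2021-05-31");
  const after = await queryClicks("2021-06-07", "2021-07-05");

  // Pages with the biggest click losses are the first candidates to investigate.
  const deltas = [...before.entries()]
    .map(([page, clicks]) => ({ page, delta: (after.get(page) ?? 0) - clicks }))
    .sort((a, b) => a.delta - b.delta)
    .slice(0, 20);

  console.table(deltas);
}

main().catch(console.error);
```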
What should people be looking for to really determine what's happening to them?
LR: The thing about recovering from core updates is that it's very multifaceted. It requires looking at a lot of different components of what's affecting your website simultaneously.
So there’s no silver bullet, there’s no singular thing that you can do to recover, unfortunately, which makes it hard for somebody who doesn’t have a depth of experience in recovery to address some of the problems that might be affecting the website.
But what I like to tell people and what we kind of do here at my agency is we start by doing like a gut check and really asking ourselves like, “Is this truly a high quality website? Is this content truly helpful?”
Because what can happen is you get caught up in thinking your own content is great or thinking that your own SEO strategies are great – because maybe they worked for five or 10 years – and this is actually becoming more and more true in the past couple of years with the algorithm updates.
Some of these sites have been enormous in terms of their market share and how successful they’ve been with SEO. They have a whole team of writers that are writing in the style that they’ve learned works for them from an SEO standpoint.
And suddenly those strategies stop working, which is kind of terrifying. So what we do a lot of is like the clients will come to us and say, “We’ve done everything. We’ve had a great SEO strategy and a great SEO team and we’ve been doing this and it’s been working for us for years.”
And then we kind of start to get under the hood and we say, “Actually like this doesn’t work anymore or this never should have worked in the first place or your content is maybe not as good as you think it is.”
So it’s a lot of tough conversation.
How often are those situations where there is nothing for you to do other than to ramp up the SEO strategy as a forefront and a focus to begin with?
LR: In my experience, what happens is that when you start to really dig into what's happening with the website, maybe the technical performance, some of the history, and the other strategies they've used throughout the years, particularly as it relates to links, you'll start to uncover, "OK, there's a bigger problem here than maybe we realized."
Like we’re working with a site right now where we keep having to ask them questions about their backlink profile because, from their perspective, “No, no, everything we’ve done was legitimate. We worked with the best legitimate companies to pay for links.”
Which is like the key term, right? And we’re like, “You know what? Like I think maybe Google’s getting smarter about that over time. So maybe that worked for you a couple of years ago, but it’s not going to work anymore.”
And their perspective is that everything they’re doing is great because it used to be great.
So it’s tricky, but I think it requires knowing the direction that Google’s algorithm is going in. And I think that’s made a lot of big changes and big leaps in the last couple of years.
We know from the Search Quality Raters Guidelines where they’re trying to go with the algorithm and a lot of the times the things that are in those guidelines don’t jive with the strategies that companies have been using for the last 10 or 15 years.
How much content should people be creating these days?
LR: It's a great question. I don't think there's a specific number, and it really depends on your industry.
I've seen some sites that don't publish very often; they just have a small handful of really meaningful, helpful evergreen pieces of content, and that's all they need. Maybe they build one or two new ones a month, or something along those lines.
Then there's this notion a lot of companies have that we need new blog posts constantly, one a week, or a packed editorial calendar. That gets you into a situation where you have a lot of content that probably isn't performing very well, and you might not be auditing it all the time.
And that's one of the things we look at when we're recovering from core updates: do you have 10,000 articles that are not really doing anything?
It might even be bringing down the overall quality of your site. So I think it depends on your industry and the demand for content, but more content is not always better.
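As a rough illustration of the audit Ray describes, finding the articles that are "not really doing anything," here is a hedged sketch that filters a Search Console "Pages" export. The file name, column headers, and click threshold are assumptions you would adapt to your own export, and the CSV parsing is deliberately naive.

```typescript
// Sketch: flag pages that earned almost no clicks over a long window.
// Assumes a Search Console "Pages" export saved as pages.csv with a header like
// "Top pages,Clicks,Impressions,CTR,Position" (column names vary; adjust to your file).

import { readFileSync } from "node:fs";

const MIN_CLICKS = 5; // assumed threshold for "not really doing anything"

const lines = readFileSync("pages.csv", "utf8").trim().split("\n");
const [header, ...rows] = lines;
const cols = header.split(",");
const pageIdx = cols.findIndex((c) => c.toLowerCase().includes("page"));
const clicksIdx = cols.findIndex((c) => c.toLowerCase().includes("click"));

const deadweight = rows
  .map((line) => line.split(","))
  .filter((cells) => Number(cells[clicksIdx]) < MIN_CLICKS)
  .map((cells) => cells[pageIdx]);

console.log(`${deadweight.length} of ${rows.length} pages below ${MIN_CLICKS} clicks:`);
deadweight.slice(0, 50).forEach((url) => console.log(url));
```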
More Resources:
To listen to this Search Engine Journal Show podcast with Lily Ray:
Listen to the full episode at the top of this post
Subscribe via iTunes
Sign up on IFTTT to receive an email whenever the Search Engine Journal Show RSS feed has a new episode
Listen on Stitcher, Overcast, or Pocket Casts
Visit our podcast archive to listen to other Search Engine Journal Show podcasts!
Image Credits
Featured Image: Paulo Bobita
Google’s Page Experience Update & Core Web Vitals With Martin Splitt
Google's Developer Advocate, Martin Splitt, joins Search Engine Journal Founder Loren Baker in this live Q&A about Google's Core Web Vitals, the delay of the Page Experience Update to June, and the overall performance and speed improvements websites need to better compete within Google and convert users.
Here is the entire transcript of the show (please excuse any transcription errors):
Loren Baker:
Hi, everybody. This is Loren Baker, Founder of Search Engine Journal. And with me today we have a special show all about core web vitals and the page experience update. With me today, I have none other than Martin Splitt of Google. Hey Martin, how’s it going?
Martin Splitt:
Hi Loren. Pretty good. How are you doing?
Loren Baker:
Pretty good, thank you. Thanks for staying up so late on a Thursday evening in Switzerland. It’s 11:00 AM here on the West Coast so I really do appreciate it. I’ll do the same for you in the future.
Martin Splitt:
Aw. Thank you very much, Loren.
Loren Baker:
So yeah, we have some folks hopping on, but let's first get started: could you take a second just to introduce yourself, Martin, a little bit about what you do, what you focus on, et cetera, et cetera?
Martin Splitt:
Sure thing. Yeah. So my name is Martin Splitt, I am a Developer Advocate on the Search Relations Team here at Google. I work with Gary Illyes and John Mueller and Lizzi Harvey and Cherry Prommawin and all the other lovely people. While our team is generally concerned with Google Search, I specialize most of the time in rendering, crawling, indexing, and specifically JavaScript, which usually influences the core web vitals, and that’s why I am very happy to talk about that topic as well. There might be super specific questions that I might not have an answer to, in which case I would refer you to our wonderful webmaster forum or the office hours or Q&A sessions that we do on YouTube every now and then.
Loren Baker:
Excellent. Well, it’s a pleasure to have you. Too bad we couldn’t have Gary and John, but we’ll get all three of you at once, I think, maybe underwater or something. You’re also big into diving, you’re a diver as well, right?
Martin Splitt:
Yes, correct. Yeah, I do dive in warm water as well as cold water.
Loren Baker:
Okay, nice. Well, no pun intended, but let’s dive into the core web vitals. Okay, all right.
Martin Splitt:
All right.
Loren Baker:
Martin Splitt:
There…
Loren Baker:
Okay, go ahead.
Martin Splitt:
No, go ahead.
Loren Baker:
And do you expect it to happen all at once or over the course of a week or more on that?
Martin Splitt:
So there is no specific date that things will start happening. Currently, the announced timing is mid-June, so it might be anytime within what would count as mid-June. It will not be an off-on kind of situation; it will gradually roll out, it will gradually add things to the mix of signals, and it will gradually start being effective. So not a full-on switch from nothing to all of it, and there’s no date announced yet.
Loren Baker:
Great. So we’ll do what we can to prepare for Mid-June.
Martin Splitt:
So I think the timeline is roughly starting mid-June and then should be fully in effect at some point in August.
Loren Baker:
Okay good. We have the beginning of the summer to the end of the summer on that front. And you said things will be rolled out gradually, do you see any signal becoming more important in that rollout or prioritized?
Martin Splitt:
Not that I’m aware of.
Loren Baker:
Okay.
Martin Splitt:
What I do know is that at the beginning we will definitely roll out for mobile first and then eventually desktop will join the mix as well.
Loren Baker:
Which was confirmed I think earlier today, right? So mobile and desktop: for those of you that are possibly only focusing on one or the other, it’s time to focus on both, right? Which is interesting because I find, from an SEO perspective, a lot more companies seem to focus on desktop even if the bulk of their traffic is mobile. So thank you for bringing more awareness to mobile experience and mobile usability as well.
Loren Baker:
All right. So I’m just going to dive into, not to use a dive thing again, but a lot of different questions on core web vitals and the updates, and then we can take it from there. So one question I see a lot of is, how relative are core web vitals to the space that someone competes in? So for example, if they’re traditionally competing against other sites which are slower than them and have not updated their core web vitals, and they’ve updated theirs a bit, but they’re seeing mediocre scores, maybe they’re scoring "needs improvement" across the board, maybe some good, and maybe some bad. But when they check out the bulk of their competition, their competition’s pages aren’t scoring very well at all. Is it still as important to prioritize all of these fixes if the folks in their competitive space are not? And how much of a difference will that make?
Martin Splitt:
This is really, really hard to answer because it obviously is one out of many signals, and obviously, or it should be obvious, relevancy and good content still matter more than speed, because that content delivered fast is still that content. So assuming all other things being equal, which they never are, you might see that the core web vitals have a tiebreaker effect where you would see a ranking improvement. Obviously that is practically never the case, so depending a little bit on your niche and on the specific circumstances of your page versus your competitors’ pages, you might see bigger effects or smaller effects, depending again on the query, on the intention, on the location, on all the other factors that might be there. So I can’t say it’s going to be a big shift, because for some people it will be insignificantly small, and I can’t say it’s going to be a small shift, because for some people it will be a big shift. So that’s something that remains to be observed.
Loren Baker:
Do you think that shift will grow over time though, even if it is small at the beginning.
Martin Splitt:
Loren Baker:
Martin Splitt:
Yeah.
Loren Baker:
Is the core web vitals update going to give people a break, so to speak, if they’re using a third-party app which is leading to their site having lower scores than it would have with no apps on the page? Kind of a strange question. I mean, a lot of these…
Martin Splitt:
No, I understand where you’re coming from. Yeah, I understand where they’re coming from, and I’ll probably answer another question following up on that one, which might be, what about using certain Google products like Ads or GTM or Analytics. The answer for all of these questions is pretty much the same. Think about what we are trying to do with the page experience signal. What we’re trying to do there is quantify what makes the user have a good experience with a page. And it doesn’t matter what tools are being used, what libraries or frameworks are being used, if there’s JavaScript on the site, if there’s no JavaScript on the site, if there’s apps on the site, if it’s all first party on the site, if it’s using Google Analytics or Google Ads or Google Tag Manager, none of that matters: if it slows down the page, it’s detrimental to the experience of the user. It doesn’t matter where the reason is coming from, if it’s bad first party code or bad third party code, everything is possible to do with less impact on the core web vitals than it is probably done right now out of not being aware of that being a problem, or a lack of care, or other technical reasons that need to be addressed at some point.
Martin Splitt:
And we can argue about whether the core web vitals are really completely modeling that. I would say they don’t, but it’s the best approximation that we have right now, and actually measuring performance and measuring experience is really, really hard, and we will see an evolving set of metrics as part of the core web vitals evolving over time. But generally speaking, the idea is to give pages that are giving a good experience to users a boost. And I don’t think it’s a good experience if I am reading an article about something I’m potentially going to buy, and then whatever I’m reading is shifted down because there’s some review stars popping in at the top. Does that mean you shouldn’t have review stars? No, have review stars, but make space for them so that when they pop in nothing else moves on the page. It’s not that it’s impossible to do this.
Martin Splitt:
I get this question a lot with cookie consent banners. So is a cookie consent banner that I have to have for legal reasons going to drag down my CLS? Probably yes: if it’s implemented in a way that is disruptive to the user, it might actually cause cumulative layout shift. If it’s only causing a little bit of it, that’s not even a problem; we’re not saying zero is what you need to target. You need to target something that is reasonable, which I think is 0.1, based on the percentage of the effective viewport and the amount of shifting that happens, so there is a certain amount of shifting that can happen while still staying under the threshold of what core web vitals consider a good experience. But if you implement it, let’s call it lazily, and just go like, “Yeah, it’s going to be fine, yes, it’s going to move everything below once it pops in,” then that’s not a great way of implementing it and you might want to reconsider the way that you implement it.
Martin Splitt:
If you’re not implementing it because it’s coming from a third party, let them know, tell them, “Hey, by the way we noticed that your solution does this, we really like your solution but we really don’t like how it kind of treats our users, so would you consider fixing that?” And there are ways of doing it, it just needs to be done.
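(Editor's note: for anyone who wants to see the layout shifts Martin describes on their own pages, here is a minimal browser sketch using the Layout Instability API. It is simplified, with no session windowing as the official CLS definition uses, so treat it as a rough diagnostic rather than the metric itself; Google's web-vitals library is the more faithful option.)

```typescript
// Minimal sketch: watch layout shifts (e.g. a cookie banner pushing content down)
// and keep a running, simplified CLS-like total. Runs in Chromium-based browsers.

interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let clsTotal = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    // Shifts immediately following user input don't count against CLS.
    if (!entry.hadRecentInput) {
      clsTotal += entry.value;
      console.log(`layout shift of ${entry.value.toFixed(4)}, running total ${clsTotal.toFixed(4)}`);
    }
  }
});

observer.observe({ type: "layout-shift", buffered: true });

// The "good" threshold Martin mentions is 0.1; warn once the running total crosses it.
setInterval(() => {
  if (clsTotal > 0.1) console.warn(`CLS ${clsTotal.toFixed(3)} exceeds the 0.1 "good" threshold`);
}, 5000);
```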
Loren Baker:
It’s a good point on many levels. One, you may think something is good for the user because you think, “Oh, having this review section helps people know that this product was reviewed well, therefore I should have it and the user wants it.” Question one, does the user really want it, right? Question two, if the user does want it, how do we implement it so it doesn’t move the rest of the page? Same thing with chat buttons. Just as an average internet user, especially on a mobile device, I’m seeing more and more static chat buttons utilized, getting rid of these dynamic chat buttons where I’ll try to scroll down a product page and I’ll hit the chat multiple times because I have giant thumbs, right, or something along those lines, right? And you have to take that into account.
Martin Splitt:
There’s one shop that I really like but I don’t visit on mobile, and that has cost them sales actually, because they have a chat that pops over everything on mobile. On desktop it’s actually not very intrusive, it just pops in the corner, and I’m like “Yeah, fine, whatever,” I ignore that. But if I’m trying to buy a product and I’m looking for the product and I go to that product page and then a huge chat takes everything away and I have to awkwardly scroll on the mobile phone and then tap it away and then actually… Not a great user experience, I’m sorry.
Loren Baker:
Agreed 100%, happens to me all the time actually. And then it’s funny too because a lot of the eCommerce companies that I talk to about chats, I’m like, “Do people use the chat button? Is this important to you? Does it convert?” “Oh, I don’t know. Not sure. I don’t know.” Well, that chat button is currently ruining the user experience.
Martin Splitt:
Loren Baker:
Yeah exactly. Next question that comes in, which you alluded to, does this also apply to Google Analytics, Google Ads, Google Tags, anything on the Google side that’s powered by Google maybe not the same division of Google that you work within. Does this also effect the site negatively?
Martin Splitt:
Loren Baker:
Right, and then also analytics is front-end implementation so there are ways to change how it’s implemented on this side too. And it helps keep the rest of Google accountable and you’re right, it would be a little bit unfair on that side to do so. Are subdomains evaluated independently or part of the root domain for core web vital scoring?
Martin Splitt:
I actually don’t know that specific detail; that’s something that you would have to ask elsewhere. The webmaster forum is probably a good place to ask these questions.
Loren Baker:
Okay great. Another question that came in that’s a little bit similar is, are no index pages being used to evaluate a site’s core web vitals as well as index pages? So pages that are blocked from indexing and/or disallowed from content updates.
Martin Splitt:
Right. I mean, in the end, a page gets a boost, if it’s not in the index it can’t get a boost in ranking, right? You have to be in the index to be ranking. So if you want to see a ranking improvement on something that is not indexed then nah.
Loren Baker:
Martin Splitt:
As far as I’m aware, we’re not mixing these things and specifically the ranking, again, is per page. So for that page, we wouldn’t have any data because we don’t put it in the index, so we can’t store any core web vitals results for that specific thing or look it up from whatever data source we’re looking it up at. What I don’t know is if we are accumulating, and again, I do not know the answer to that. What I don’t know is if there is some sort of accumulation that we do in case we don’t have signals for something, but it’s not as if, “Oh, you have a page that doesn’t pass core web vitals, hence there will be no core web vitals boost applied to your entire site.” That’s not how that works.
Loren Baker:
Okay gotcha. Well, anyway it is a good excuse for people to get their ad pages in line.
Martin Splitt:
Loren Baker:
But those pages are picked up if someone accesses them via Chrome.
Martin Splitt:
Yeah, exactly. If people are visiting them, the data comes back into the data collection. What doesn’t happen is this page will not get a ranking boost from core web vitals, because we might have the core web vitals data, but it can’t rank if it’s not being indexed, and noindexing means it’s not being indexed. So that kind of doesn’t check out, right?
Loren Baker:
Martin Splitt:
It’s neither. It’s not even a coincidence. Page speed has been a ranking factor before.
Loren Baker:
That’s true.
Martin Splitt:
So it has nothing to do with page experience in this case but it just coincidentally, by making the site better accidentally, you got a ranking boost from something that is not page experience.
Loren Baker:
That makes a lot of sense actually, so kudos for getting your page sped up before the page experience update goes out. You may be seeing an improvement in ranking because of those changes that you’ve done but not necessarily because of the page experience update. Okay, that makes perfect sense. Next question. Why does Google PageSpeed Insights sometimes show a completely different result from Lighthouse performance reports on the same page? So if someone’s doing a page report on Google PageSpeed Insights compared to Lighthouse, or maybe compared to Lighthouse within the Chrome browser, why would they be seeing different testing results on that front?
Martin Splitt:
I would be very surprised if you would not be seeing different testing results. To be honest, I would be surprised if you don’t see different testing results with Lighthouse when you test over multiple days. That’s because, as I said, quantifying performance is actually really, really tricky and there’s lots of factors. And then you have to understand where data comes from. So there’s basically two gigantic buckets of data that you can look at. One is real user metrics, that’s the telemetry data reported back by Chromium browsers for users who have opted in to sending telemetry data back. That you can see in Chrome UX reports; there you have the data that we are getting in an anonymized form in terms of how fast the pages have been for actual users out in the field. And that obviously is already data that is very, very unstable, in the sense of if one day I have 100 visitors coming to my site on a fantastic broadband connection on a recent MacBook, they will probably see that even my website being terrible is probably going to be okay, because the network speed is fast, the computing power is available, and that kind of smooths this out for them. And then the next day it’s people on small or slow phones, on shaky mobile connections with high latency, and then everything will be looking a lot different from that.
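(Editor's note: the field data Martin refers to is queryable through the public Chrome UX Report API. A hedged sketch, assuming you have created an API key and substituting a placeholder origin:)

```typescript
// Sketch: pull real-user (CrUX) field data for an origin via the Chrome UX Report API.
// Field data is only returned when Google has collected enough samples.

const CRUX_API_KEY = "YOUR_API_KEY"; // assumed: created in the Google Cloud console

async function cruxFieldData(origin: string) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, formFactor: "PHONE" }),
    }
  );
  if (!res.ok) throw new Error(`CrUX returned ${res.status} (often: not enough field data)`);
  const { record } = await res.json();

  // p75 is the value Google uses when judging whether a metric is "good".
  for (const [name, metric] of Object.entries<any>(record.metrics)) {
    console.log(`${name}: p75 = ${metric.percentiles?.p75}`);
  }
}

cruxFieldData("https://example.com").catch(console.error);
```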
Martin Splitt:
Obviously as data is collected, we are making sure that our sample size is large enough so that it’s not like 10 people today, 10 people tomorrow, which would give us completely unusable data. But if the sample size is large enough and the time frame that we’re looking at is not just one day but a week or a month, then the data starts smoothing out and you can actually draw conclusions from the signal you’re getting there. You can’t really do that by looking at snapshots. And that’s field data, that’s what we are using in page experience. But what we are not using in page experience, at least not planned to do anytime soon, is lab data. Lab data is where you are running a program in some sort of form and then actually try to gather the data and get the data that would be sent as telemetry, and there are multiple tools like that. There’s WebPageTest, there’s PageSpeed Insights, there’s Lighthouse, and there’s a plethora of other third-party tools that do these things. And the thing with Lighthouse, especially the Lighthouse that you might be running on your machine in Chrome, is that it does a simulation; it runs within Chrome, so it is affected by things such as other things running on your computer.
Martin Splitt:
If there is something else that takes away CPU power because you are converting a video in the background or your computer is doing an update or something, or if you’re bittorrenting something to a friend or whatever, then that might saturate your network, so you might actually get a lot of jitter, so noise versus signal, from Lighthouse. And I know that when I run Lighthouse 10 times, I basically get nine different scores, and that’s expected; it’s not real user metrics, it is lab data. A lot of the things like LCP, that’s a heuristic, so it tries to figure out, statistically speaking, and get reasonably sure, what we think is the main content, what the largest content is, and what the largest contentful paint is, and that’s when we stop the clock.
Martin Splitt:
But sometimes things just take a little longer, sometimes your browser might take a little longer to actually spawn a process because your processor is busy with other things, and then things take longer. And if they take longer, that means you might actually flap around the threshold, right? If it’s like, “Oh, we need to be done with this in two seconds,” then one time you are done in 1.8 seconds, yay, and the next time it takes 2.2 seconds, oh. And then sometimes, because your computer might do some bananas heavy-lifting computing task in the background that you’re not even aware of, it might take five seconds, and then you get a very, very wide variety of data, and that’s just how lab data unfortunately is. Unless you have a controlled lab environment, where you’re like, “Okay, so we are requesting the website from a local server so that we can rule out any network weirdness, and we are doing it on a computer that does pretty much nothing else than just that, all over again,” then you get more or less the same scores. And even then, because it’s a heuristic, it might decide slightly differently what it considers to be the largest contentful paint, so you might actually, and same with FID, same with LCP, get slightly different values for these as well. So there’s always some noise in that signal.
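(Editor's note: one common way to cope with the run-to-run noise Martin describes is to run Lighthouse programmatically several times and report the median. A minimal Node sketch, assuming the lighthouse and chrome-launcher npm packages are installed and using a placeholder URL:)

```typescript
// Sketch: run Lighthouse N times against one URL and take the median performance score,
// since individual lab runs are noisy ("ten runs, nine different scores").

import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function medianPerformanceScore(url: string, runs = 5): Promise<number> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  const scores: number[] = [];
  try {
    for (let i = 0; i < runs; i++) {
      const result = await lighthouse(url, {
        port: chrome.port,
        output: "json",
        onlyCategories: ["performance"],
      });
      const score = (result?.lhr.categories.performance.score ?? 0) * 100;
      console.log(`run ${i + 1}: performance score ${score.toFixed(0)}`);
      scores.push(score);
    }
  } finally {
    await chrome.kill();
  }
  scores.sort((a, b) => a - b);
  return scores[Math.floor(scores.length / 2)]; // the median smooths out the jitter
}

medianPerformanceScore("https://example.com").then((m) => console.log(`median: ${m.toFixed(0)}`));
```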
Martin Splitt:
And then with PageSpeed Insights, PageSpeed Insights is basically just running in the cloud somewhere; it is leveraging Lighthouse, but it’s not running on your computer, it’s running somewhere else in a different environment. I don’t exactly know what this environment looks like because I haven’t really had any insights into that. I haven’t had any insights into PageSpeed Insights. So I’m assuming that it’s some sort of shared server infrastructure, and you might see differences depending on how much it’s leveraged and how much available capacity it has at different points in time, so you might actually see fluctuations within PageSpeed Insights, but it’s definitely going to be different from your website being tested on your computer in Chrome’s Lighthouse. And that is, to begin with, because… I don’t know where PageSpeed Insights lives. Let’s say PageSpeed Insights lives in a data center in Virginia.
Martin Splitt:
So if my website is hosted here in Switzerland and I test it on my local machine, network doesn’t really play a role because it takes milliseconds to go to the other end of Switzerland, go to my server, and get the website back. It’s going to take a while to go over the ocean to the PageSpeed Insights server in Virginia and then actually have that communication happen, so it’s inherently going to be slower. And I think network is mostly… I do see that in the time to first byte being different; for the core web vitals that doesn’t matter so much, but still, the machine that it simulates, I think it simulates a Moto G4 phone, is going to have very different specs than a Moto G4 simulation on my MacBook. So we are going to see different scoring across the tools, and even within the same tools they will fluctuate.
Loren Baker:
What’s the most accurate tool to utilize, as provided by Google, that has the most field data within it?
Martin Splitt:
I guess the best way of doing it is PageSpeed Insights right now, because at least you’re getting roughly the same instance and roughly the same configuration. And it also shows you field data from CrUX if that’s available, so you get lab data and field data in PageSpeed Insights, which I think is great.
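(Editor's note: the combination Martin praises here, lab plus field data in one place, is also available programmatically through the PageSpeed Insights v5 API. A small sketch with a placeholder URL; an API key is optional for occasional use:)

```typescript
// Sketch: one PSI v5 call returns both lab data (Lighthouse) and field data (CrUX),
// the latter only when the URL or origin has enough real-user traffic.

async function psiReport(url: string) {
  const endpoint =
    "https://www.googleapis.com/pagespeedonline/v5/runPagespeed" +
    `?url=${encodeURIComponent(url)}&strategy=mobile`;
  const data = await (await fetch(endpoint)).json();

  // Lab data: the simulated Lighthouse run performed on Google's infrastructure.
  const labScore = data.lighthouseResult?.categories?.performance?.score;
  console.log(`Lab performance score: ${labScore != null ? labScore * 100 : "n/a"}`);

  // Field data: real-user CrUX metrics at the 75th percentile.
  const field = data.loadingExperience?.metrics;
  if (field) {
    console.log("Field LCP p75 (ms):", field.LARGEST_CONTENTFUL_PAINT_MS?.percentile);
    console.log("Field CLS p75 (x100):", field.CUMULATIVE_LAYOUT_SHIFT_SCORE?.percentile);
  } else {
    console.log("No field data available for this URL yet.");
  }
}

psiReport("https://example.com").catch(console.error);
```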
Loren Baker:
Great. We have some really good questions coming in. I encourage any viewers right now, if you have any questions, to ask them. Before I get into the one that just popped up from Gabriel, I do have a question that came up during a webinar in the past with Keith Goode, who I also see is on from IBM. So the question that was asked previously was, “Hey, I work on multiple different sites. One I’ve optimized, and I see everything is passing and Search Console reported the changes almost instantly. Another site that I’m working on, everything is passing according to the tool sets that we utilize, same exact thing, but we’re not seeing any data yet in Search Console after three or four weeks. What’s the difference…”
Martin Splitt:
Not enough field data.
Loren Baker:
Not in the field data. Not enough visitors, not enough field data.
Martin Splitt:
And even… There can be enough, but if these visitors are not generating telemetry data then we are still not having the telemetry data. And even if we have some data, it might not be enough for us to… oh damn, the word has to slip me. Ah, confidently say this is the data that we think represents the actual signal, so we might decide to actually not have a signal for that if the data source is too flaky or if the data is too noisy.
Loren Baker:
So it may take time.
Martin Splitt:
It may take time.
Loren Baker:
No difference whether it’s more traffic, less traffic, just it takes time to put together.
Martin Splitt:
Yeah. I mean, more traffic is more likely to actually generate data quickly but it’s not a guarantee.
Loren Baker:
Okay. Gotcha. So don’t freak out if you fixed everything and you still don’t see the reporting there. If you feel confident, then once Google feels confident with all the data that they’re able to compile, it should update in time. Next question from Gabriel, “Hey Martin, does Google calculate core web vitals looking only at the last 28 days of RUM data?”
Martin Splitt:
I don’t know.
Loren Baker:
And does this range impact the rankings?
Martin Splitt:
I don’t know. That’s a really good question. I can try to follow up with the team to figure that one out but I don’t know at this point.
Loren Baker:
Excellent. Thank you for the question, Gabriel. Hopefully we’ll have a follow-up soon. Okay, can you confirm or deny whether visits from a Google search result to an AMP site will use the data from the cached page load to determine core web vitals metrics? If that is how it is factored, then won’t all AMP search visits get perfect LCP and FID scores?
Martin Splitt:
I don’t think it works like that.
Loren Baker:
Okay good. I think that would also depend on how the template was set up as well. Are there any CMS platforms that you think will be most impacted by this update and why?
Martin Splitt:
Don’t know.
Loren Baker:
Okay, it could possibly be CMS platforms that have a lot of additions and layers added to them, but I’m not sure. Is there going to be any kind of leniency for companies that are having a hard time getting their developers to implement these fixes on this front?
Martin Splitt:
We have announced it last year, we have pushed it back from May to June. At some point, it’s going to happen.
Loren Baker:
Especially as you’re elevating this internally to your devs to make these changes, which are critical, and the Google team, to Martin’s point, has given us enough time to get ready for this. You’ve been able to get these fixes in, you’ve been able to build a case for it. At the same time, don’t just fix these issues if it’s showing up negatively on a Google score in webmaster tools. If you’re able to identify usability issues, chances are they’re going to haunt you further down the line from a ranking perspective as this becomes more important. But secondly, you might uncover something that’s keeping people from converting, that’s keeping people from sharing, that’s keeping people from experiencing all the content that they should be experiencing.
Loren Baker:
Can they scroll down? Do your jump links work? What’s happening when they’re trying to load a large infographic image on a small phone? All of that is a component of this really at the end of the day, so don’t just optimize your user experience for a score that Google gives you. There’s plenty of different services out there that will give you feedback from real users as they’re trying to scroll through your site. So sorry about that. Next question, Martin, what happened to your unicorn hair?
Martin Splitt:
Diving and cold temperature has happened. Over the winter, it was really cold, I continued diving and long hair and diving don’t go too well together when it’s cold. And it was not convenient so I just cut my hair short.
Loren Baker:
There you go. That explains quite a bit, that explains quite a bit on that side. Let me go through the rest of these questions that are coming in here. Okay, so this is interesting. I’m not sure if you can answer this or not but, if someone is in a situation where they’re using various different tools and add-ons and apps and plug-ins to be able to make their user experience “better” or upsell the user or whatever it is, and those tools aren’t making the changes at the end of the day and they can’t implement them differently, should they be looking at different solutions?
Martin Splitt:
I guess looking at different solutions is definitely a good idea. I mean, if you had, I don’t know, let’s say you own a car you drive a lot and you have something that somehow reduces your fuel consumption but it makes you crash into a wall every third day. I mean, the lowering of fuel consumption is amazing, but there’s this annoying side effect that you crash your car every couple of days, maybe only every couple of months, maybe only every six months you crash once.
Loren Baker:
Tesla analogy?
Martin Splitt:
No, but okay, if that would be the case, I would say that the issue with it outweighs the benefit of it, and there might be other ways of reducing your fuel usage that you might want to look into, like a different style of driving, a different kind of car. Similar here: if it gives you more stuff, that potentially is great, but then it has these implications, and you have to judge for your specific case if you’re okay with the implications that it has or if you’re like, “Nah, I’ll try to see if we have something else that does that without the problems.”
Loren Baker:
So just look into that, because it might just be a better user experience at the end of the day and it might improve the strengths that you’re seeing as well, right? So maybe it’s time to get a new car that’s not crashing all the time and gets better gas mileage or is a little bit more carbon neutral. Okay, the next question that has come in, which is kind of interesting, is: if my core web vitals scores are really good from a mobile experience perspective, and then two different scenarios, they’re either really good from a desktop experience perspective or they’re really bad from a desktop experience perspective. Will I then rank better in mobile first, or for mobile users, if my core web vitals score is better on the mobile side than it is on the desktop side? Or is there some kind of aggregate score looking at both experiences that is being utilized to weigh a site, because you don’t necessarily know how people are going to access from one device to another in the future?
Martin Splitt:
I am not aware of any aggregates at the moment, that doesn’t mean that there won’t be in the future. As far as I’m aware right now, mobile is being used for mobile and desktop is going to be used for desktop.
Loren Baker:
Okay. Mobile score, desktop score, no one really knows what the future holds. So make sure it’s both. Hi Crystal. So another question I’m just going to add on to that a little bit, if a site is seeing 80 to 90% mobile users, right, 10 to 20% desktops, say most of your B2C oriented sites, shopping sites, things like that, should they really be worried about desktop at the end of the day and if they don’t address their desktop experience will that negatively affect them on the mobile side?
Martin Splitt:
I don’t think so.
Loren Baker:
Or just not bother?
Martin Splitt:
I don’t think you need to worry about that too much then.
Loren Baker:
Martin Splitt:
I mean, the core of it would be then, if you implemented it in a way to trick the system, it wouldn’t necessarily work, because especially CLS is calculated over the lifetime of the page, so you would still see shifts if they appear after user activity. The other thing is, if content only pops in after user activity, Googlebot probably wouldn’t see it, unless you use lazy loading, and again, if you try to trick, you’re easily inviting more trouble than it’s worth. And I’m also not sure how well that would work, and even if it would work now, we would definitely have an incentive to figure that out in the future. If you are using techniques such as infinite scroll and lazy loading and you implement them correctly, it might actually have a positive effect on user experience and thus on the core web vitals as well.
Loren Baker:
Martin Splitt:
Loren Baker:
Excellent. Another interesting question that comes up is with Google Chrome DevTools and testing the page load: when testing and documenting the page load experience within DevTools and pulling out CLS information specifically, do you recommend that storage is cleared within Google Chrome, or, like you had said earlier, that any apps or tools running in the background that affect overall usability, loading, timing, et cetera, are closed? And what’s the best way to prepare Google Chrome to accurately pull these experience numbers if someone’s trying to, say, visualize all of these issues on the timeline with their development teams?
Martin Splitt:
I would probably try to basically either launch a completely separate Chrome instance, and then probably the incognito window of that instance, so that I’m not having any extensions or stuff available there. Or just use something like Puppeteer, which also launches an entirely independent Chrome instance, and I would do that on a machine where I have as few applications running as possible. Maybe I run a virtual machine somewhere in the cloud that does nothing else but run my Chrome instance and then give me the data back. Because, as far as I’m aware, I’m not sure how to get that through Puppeteer, but there must be a way of getting the profiles out, and then you can import the profile file into your browser and actually investigate it with your DevTools in your browser, so that’s definitely a possibility.
Loren Baker:
So that was Puppeteer?
Martin Splitt:
Puppeteer yeah.
Loren Baker:
Pptr.dev
Martin Splitt:
Yes.
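(Editor's note: a minimal sketch of the Puppeteer workflow Martin outlines, launching an isolated Chrome instance, recording a performance trace, and saving it so it can be loaded into the DevTools Performance panel. The puppeteer package, URL, and output path are assumptions.)

```typescript
// Sketch: capture a performance trace in a clean, isolated Chrome instance.

import puppeteer from "puppeteer";

async function captureTrace(url: string, out = "trace.json") {
  // A fresh launch uses a clean temporary profile, so no extensions or cached
  // state interfere with the measurement.
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.tracing.start({ path: out, screenshots: true });
  await page.goto(url, { waitUntil: "networkidle0" });
  await page.tracing.stop();

  await browser.close();
  console.log(`Trace written to ${out}; load it via the DevTools Performance panel.`);
}

captureTrace("https://example.com").catch(console.error);
```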
Loren Baker:
I’ll drop a link in here, in the StreamYard, right now: pptr.dev. Go and check that out afterwards. I had a lot of questions about Google being able to actually identify what the issues are. If Google can identify UX issues and core web vitals within Chrome on a page-by-page level, can we get that from a reporting perspective within Search Console?
Martin Splitt:
Good question. I guess that’s not always going to be easy especially because we have to make sure that we are not leaking too much information because that might make things less private than you want. But I don’t know if there’s anything planned to give you more insights into that. You can get a lot of insights about that already if you’re testing in your local DevTools, but I don’t think there’s anything planned on the roadmap for search console.
Loren Baker:
Excellent. So let’s see what day is today, it’s May 20th I believe. You had said we have… the clock is ticking, actually…
Martin Splitt:
The clock is ticking, tick tock.
Loren Baker:
… if we were on the original timeline, we’d probably all be freaking out right now. But we’ve been given an additional month, more or less, right? You would say it’s probably going to be, we’re looking at, a mid-June rollout, lasting until mid-to-late August, or lasting until August, so a slow rollout, some things changing over time. For those that are currently viewing or listening right now, what tips do you have if folks have not been able to get these fixes in, if they’re currently working on it? If it doesn’t look like they’re going to be able to get them in before mid-June, is there anything that you can add to this discussion that you’d like to, to give people maybe better peace of mind and/or to say it’s time to put the pedal to the metal?
Martin Splitt:
So I think, first things first, don’t panic, don’t completely freak out. Because as I said, it’s a tiebreaker. For some it will be quite substantial, for some it will not be very substantial, so you don’t know which bucket you’ll be in basically because, again, depends a lot on context and industry and niche, so I wouldn’t worry too much about it. I think generally making your website faster for users should be an important goal and it should not just be completely ignored, which is the situation in many companies today that they’re just like, “Yeah whatever.” I think when you can get your company to shift from, “Ah whatever,” to, “Oh yeah, we need to get that done but we can’t get it done until June,” that’s the milestone, that’s the improvement that you want to have, you want to have a commitment to make things better and you want to be the one who said, “Hey, this is going to be a factor in rankings so don’t be surprised if we are seeing some changes in ranking.”
Martin Splitt:
Loren Baker:
Yeah. Not to sound like a broken record, but for the users and for conversion, I mean, for me, from a consulting perspective, that’s always really helped. I’m going to drop a link in here right now, it’s basically a Cloudflare study that looked at page performance, speed performance, and conversion rates, right? So it’s very easy for folks in SEO to stick within our little SEO bubble and think that this is only something that helps with Google ranking, or it’s a tiebreaker at the end of the day or whatever. But the fact is that all of us within SEO, whether we like it or not, we’re in charge of one of many sales and lead generation tools, right? So if we can make the case to make this change, whether it’s by June, or, like Martin said, a six-month plan, 12-month plan, whatever, don’t freak out, but let’s make this plan, we can improve things across the board, right? So one good way to sell this internally is not necessarily to say, “You need to make this because if you don’t we’re not going to increase rankings 100%.” But it’s to pile on all the other benefits, sorry.
Martin Splitt:
This keeps reminding me of this cartoon where there is a climate scientist on the stage saying this is how we can improve nature and ecology and the air quality and reduce pollution and reduce our reliance on non-renewable energies, and someone in the audience gets up and says, “But professor, what if we make all these improvements and then the world wouldn’t have ended otherwise anyway?” It’s like, “Ah.” Yeah, why accidentally make the world a better place? It’s kind of a weird question, right? You’re making things better for your users; that’s never not going to pay off in some form.
Loren Baker:
Yeah, so there’s going to be benefits even outside of SEO.
Martin Splitt:
Yeah.
Loren Baker:
So once you make the case and get this implemented internally with the developers, and the PPC team shows up to a meeting and starts bragging about how conversions have increased, make sure that you plant that seed to let everyone know that there’s going to be better PPC conversions, if they’re utilizing the main site for the landing pages, there’s going to be better social media conversions, better email conversions, better direct traffic conversions, which mostly is a search anyway, and better conversions across the board, right? So definitely get that in from an internal selling perspective.
Martin Splitt:
Exactly.
Loren Baker:
Because promising a ranking change as a result of this is not necessarily going to be guaranteed, but promising a user a better user experience and then the ability to actually convert better and especially if that chat button’s not taking up someone’s entire phone. Who really wants to chat with the company anyway when they’re making a purchase decision? I find that most of the time that just slows down the entire process. So when I walk into the Apple Store, I don’t want to have a conversation with somebody that’s working there, I just want to buy and get out of there. So anyway, Martin it’s getting late where you are, thank you so much for jumping on. I hope you really enjoy your day of diving tomorrow and this is going to be cold water diving, right?
Martin Splitt:
Yes, correct. I think the water is 11 degrees celsius.
Loren Baker:
All right, so watch out for the catfish and let us know if you find anything cool there on the bottom.
Martin Splitt:
Will do.
Loren Baker:
It’s been a pleasure everybody, we’re going to be following up probably in a couple of different SEJ posts with everything, so we’re about to sign off but thanks so much and looking forward to seeing all this roll out in mid-June.
Martin Splitt:
Awesome, looking forward to seeing all of your wonderful faces and smiles again soon and thanks a lot for having me Loren.
Loren Baker:
You’re welcome Martin, thanks for everybody for tuning in. This is Loren Baker and Martin Splitt with SEJ Show, signing off. Cheers.
Google Announces Search Redesign Using MUM Algorithm
Google announced that MUM will be integrated into some searches on Google Search. Google’s search results page is undergoing changes that will introduce new ways to discover and explore topics for certain searches.
This new way of searching expands on the old way of searching for answers and introduces a more intuitive way to explore topics.
In the coming months, Google will guide users down topic paths in a redesigned, more visual search experience for some searches.
One of these changes is apparently already in search.
The Google MUM Algorithm
Earlier in 2021, Google introduced a new search algorithm that can search across languages and with images to find answers to complex questions.
This algorithm is called MUM, which is short for Multitask Unified Model.
The feature that Google is running with most is the ability to search with images instead of just text.
Google has already announced the integration of MUM into their Lens app in the coming months.
Related: Research Papers May Show What Google MUM Is
How Google MUM Will Change Search
The MUM algorithm will be introduced into Google Search and it will bring dramatic changes to the whole idea of what search means.
Users currently search for answers. But this new way of searching will help users explore more complex tasks while also introducing an entirely different way of presenting answers, particularly with images.
The goal for these changes is to make searching more intuitive.
Google Search Redesign
Google characterized this change as a redesign of search and that’s exactly what it is.
This new version of search takes three big steps away from the old ten blue links search engine.
MUM is Changing Search in Three Important Ways:
Things to know
Topic zoom
Visually browsable search results
Related: Google MUM is Coming to Lens
What is Google’s Things to Know?
The “things to know” feature incorporates Google MUM to understand all the ways users explore a topic. Google provides the example of the keyword phrase “acrylic painting.”
Google notes that there are over 350 topics associated with that keyword phrase.
The “things to know” feature will identify the most relevant or popular “paths” that users take in exploring that topic and surface more website content that relates to that.
So rather than waiting for users to conduct follow up searches, Google search will anticipate those related topics and surface the content.
This is how this new search feature works:
“For example, we can identify more than 350 topics related to acrylic painting, and help you find the right path to take.”
Video Animation of “Things to Know” Feature
Topic Exploration in Google Search
The second new feature is referred to as Topic Zoom. This feature allows a searcher to jump in and out of related topics.
A searcher can broaden the topic or zoom in to a more granular sub-topic.
Visual Exploration
The third new MUM feature is one that appears to be available now in Google Search.
Visual Exploration is a new, visual way to explore a topic.
This new feature will not show up for all searches.
It is restricted to searches where the user intent is to find inspiration.
Google explains it like this:
“This new visual results page is designed for searches that are looking for inspiration, like “Halloween decorating ideas” or “indoor vertical garden ideas,” and you can try it today.”
How Google MUM and AI Are Changing Search
These new ways of searching appear to be designed to help searchers discover more websites and more web pages on the web.
Rather than limit users to the ten blue links, Google is making it easier for searchers to explore topics on many more websites than the old ten blue links way.
This should be good news for publishers.
How do you SEO for this? I suspect that nothing is going to change on Google’s end as far as search optimization.
The traditional way of doing things where title tags and headings are used for dumping high traffic keywords in the Keyword-1, Keyword 2 style of SEO may have to be revisited, but that’s been the case for several years.
It may be useful to follow Google’s guidelines for titles and headings by using them to actually describe what the web page and the web page sections are about.
Making a web page easy to understand is one of the core attributes of good SEO.
Citation
Read Google’s announcement on MUM in Google Search: How AI is Making Information More Useful
Google Broad Core Updates And Why Some Health Sites Are Affected
Google’s John Mueller has stated that Google’s broad core updates have not been targeting health sites. But there is a perception that some health related sites tend to be sensitive to Google updates. What kinds of changes can affect health sites while not specifically targeting health sites?
User Satisfaction Metrics, RankBrain and Neural Matching
Over the past few years, Google introduced neural matching and RankBrain to help Google better understand search queries (neural matching) and to help Google understand web pages better by matching pages to concepts (RankBrain).
In my opinion, a better understanding of what users mean when they ask a query could affect health related sites. Health topics can be divided between strictly scientific meanings and alternative and so-called natural cures.
Thus, if Google better understands that a query requires a scientific response, then it makes sense that sites promoting non-medical alternative solutions will suffer.
It’s not that Google is targeting health sites, but that Google is getting better at understanding what users want and are satisfied with when they make these kinds of queries.
The Mercola site managed to sail through earlier Google broad core updates, even though it offered the same kind of “alternative” health information that other losing sites offered.
That points in the direction that an additional signal was added or possibly that other signals were dialed down.
Even if your site is not in the health niche, it may be useful to read the conversation about health sites and traffic losses. Whatever is affecting them could be affecting your sites as well.
Dr. Pete Meyers on Health Sites and Traffic Losses
I asked Dr. Pete why health sites tend to keep being affected.
Here is what he offered:
“(1) There’s clearly a correlation between sites impacted in later core updates and the original core update. It seems logical that the levers that Google pulls in a “core” update are going to be qualitatively different than the levers they pull in more routine updates (even if we don’t know what those levers are), so there’s going to be a connection between them.
(2) It seems very likely that any given core update is imperfect and successive core updates will iterate on it. The data we’ve seen matches that assumption, to some degree. That doesn’t mean Core Update #5 is going to reverse Core Update #4, but we can expect that some changes won’t measure up to Google’s expectations and they’ll work to mitigate and refine those changes.
(3) Do we know for a fact that the update didn’t target health sites? I find Google’s language — while often accurate — to be very precise (almost to a fault). I believe that Google wasn’t hand-targeting specific medical sites, but we know that YMYL queries, for example, are very important to them. It’s possible this is even broader — mechanisms, for example, that try to analyze trust in verticals where trust is especially important (or where untrustworthy information is dangerous). Does that mean they “targeted” health sites? No, but they didn’t not target health queries 🙂
(4) Related to #3, something in this article (Google Tweaked Algorithm After Rise in US Shootings) struck me as very interesting:
“In these last few years, there’s been a tragic increase in shootings,” Nayak said. “And it turns out that during these shootings, in the fog of events that are unfolding, a lot of misinformation can arise in various ways.
And so to address that we have developed algorithms that recognize that a bad event is taking place and that we should increase our notions of ‘authority‘, increase the weight of ‘authority‘ in our ranking so that we surface high quality content rather than misinformation in this critical time here.”
That almost makes it sound like authority is situational; in some cases, Google isn’t going to require high authority, since it’s not necessary or not risky. But in other cases they’re going to set a high authority threshold. Note that ‘authority’ here could mean something more akin to trust/expertise than link equity.”
I followed up on Pete’s response, saying that the important question, which he addressed and which I agree on, is: what factors? Authority? Truth?
Here is how Pete answered:
“Yeah, that’s the kicker — How has Google actually translated these ideas into code? Generally speaking, do I think E-A-T is a good idea? Absolutely. You should build Expertise, Authority, and Trust, if you want to build a legitimate business/career. That’s going to be good for marketing, and at least indirectly good for SEO. Does that mean E-A-T is specifically built into the algorithm? No. If E or A or T are built in (which is likely, to some degree), it also doesn’t tell us how that translates into factors.
Of course, Google doesn’t want us to have that granular information that could be gamed.”
Cyrus Shepard on Why Health Sites May Be Sensitive to Updates
Cyrus Shepard contributed several thoughtful ideas about why health-related sites seem to be sensitive to Google’s broad core algorithm updates:
“I suspect for YMYL queries, Google is tightening the screws on less reputable sites in 1 of 3 ways:
One of the top sites hit has a ton of negative articles written about it. Because it’s in the health space, Google may be extra sensitive to this sentiment.
Evidence is scant, but it seems Google may be favoring sites with links closer to a trusted seed set. See Bill Slawski’s writeup of Google patents in this area.
Finally, for YMYL queries, Google may be demoting sites that it sees as dangerous if they disagree with standardized “facts” — such as those obtained from entity graphs. Sites such as Diet Doctor (promotes fasting) and Dr. Mercola (promotes anti-vax theories) disagree with conventional medical wisdom, and could thus be demoted.
In reality, it could be one of these factors, or a combination of all three. Regardless, it’s obvious Google is moving towards presenting a standardized set of information from authoritative sites for YMYL queries.”
SEO Signals Lab Facebook Group Opinions
I asked Steve Kang, the admin of the popular SEO Signals Lab Facebook Group (only members can see discussions), to ask members about this topic. A lively discussion ensued.
Verified Facts and Negative Sentiment
A member suggested that medical information is factual and can be cross-referenced for validity against published research and the regulatory warnings sent by organizations like the FDA to web publishers.
This is what that person in the Facebook group said:
“Health/health care is 1/6 of the economy and deals with critical life-and-death issues. So while there are huge opportunities for fraud or quackery there is also massive amounts of research coupled with massive regulatory oversight.
It’s a simple matter of “Stay in your lane”… you want to talk about acupuncture for pain management? Fine, because this is something that credentialed medical professionals and orgs will discuss. You start talking about acupuncture for depression, it’s bye-bye.”
Crackdown on Fake Information?
“With governments working to assign accountability to Facebook, Google, et al. for fake news, so-called hate postings, etc., tech companies are motivated to avoid liability.
Lawsuits in the health industry offer some of the largest payouts, making the industry a magnet for greedy lawyers and giving Google an impetus to avoid exposure.
Unless you’re the Cleveland Clinic, Johns Hopkins, Mayo Clinic or an accredited provider, earning authority from Google won’t be easy.”
Takeaways
There are many possible reasons why health sites tend to be sensitive to Google broad core updates. Factors such as what users want to see when they type a query, the factual correctness of information, and sentiment analysis could all play a role. But none of this is known for a fact.
What is known is that Google says it has not been targeting health sites. This means the changes may affect a broad range of sites, not just health-related sites. It may be useful to investigate why some health sites are losing traffic, because that may offer clues as to what is affecting some non-health websites.
Images by Shutterstock, Modified by Author
Insertion Sort: Algorithm With C, C++, Java, Python Examples
What is Insertion Sort?
Insertion sort is a comparison-based sorting algorithm that sorts elements by iterating over them one at a time and placing each element in its correct position.
Each element is sequentially inserted into an already sorted list. The size of the already sorted list is initially one. The insertion sort algorithm ensures that the first k elements are sorted after the kth iteration.
Characteristics of the Insertion Sort Algorithm
The insertion sort algorithm has the following important characteristics:
It is a stable sorting technique, so it does not change the relative order of equal elements.
It is efficient for smaller data sets but not effective for larger lists.
Insertion sort is adaptive, which means its total number of steps is reduced if a partially sorted array is provided as input, making it more efficient.
How does Insert Operation work?
In the insertion sort algorithm, the insert operation is used to sort unsorted elements: it places a new element into an already sorted list.
Pseudocode of insert operation:
Consider a list A of N elements.
A[N-1] is the element to be inserted into the sorted sublist A[0..N-2].

For i = N-1 down to 1:
    if A[i] < A[i-1], then swap A[i] and A[i-1]
    else stop

In the example figure, a new element 6 is inserted into an already sorted list; steps 1 through 3 of the figure show the earlier comparisons and swaps.
Step 4) We compare A[1] and A[2], and since A[1] < A[2], the left adjacent element is no longer greater. We conclude that 6 has been inserted correctly and stop the algorithm here.
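As a minimal sketch of this insert operation, here is a short Python helper (the function name insert_last_element is hypothetical, not from the article) that bubbles the last element of a list leftwards into the already sorted prefix by repeated swaps, exactly as the pseudocode describes:

def insert_last_element(A):
    """Insert A[N-1] into the already sorted prefix A[0..N-2] by repeated swaps."""
    i = len(A) - 1
    while i >= 1 and A[i] < A[i - 1]:
        A[i], A[i - 1] = A[i - 1], A[i]  # swap with the larger left neighbor
        i -= 1
    return A

print(insert_last_element([3, 7, 9, 6]))  # 6 bubbles left -> [3, 6, 7, 9]
print(insert_last_element([3, 4, 6]))     # 6 is already in place -> [3, 4, 6]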
How Insertion Sort Works
The insert operation discussed above is the backbone of insertion sort. The insert procedure is executed on every element, and in the end, we get the sorted list.
The example figure above demonstrates how insertion sort works. Initially, there is only one element in the sorted sublist, i.e., 4. After inserting A[1], i.e., 3, the size of the sorted sublist grows to 2.
C++ Program for Insertion Sort
#include <iostream>
using namespace std;

int main() {
    int unsorted[] = {9, 8, 7, 6, 5, 4, 3, 3, 2, 1};
    int size_unsorted = sizeof(unsorted) / sizeof(unsorted[0]);

    cout << "\nUnsorted: ";
    for (int i = 0; i < size_unsorted; i++) { cout << unsorted[i] << " "; }

    int current_element, temp;
    for (int i = 1; i < size_unsorted; i++) {
        current_element = unsorted[i];
        // swap the current element leftwards while its left neighbor is larger
        for (int j = i - 1; j >= 0 && unsorted[j] > current_element; j--) {
            temp = unsorted[j + 1]; unsorted[j + 1] = unsorted[j]; unsorted[j] = temp;
        }
    }

    cout << "\nSorted: ";
    for (int i = 0; i < size_unsorted; i++) { cout << unsorted[i] << " "; }

    return 0;
}
Output:
Unsorted: 9 8 7 6 5 4 3 3 2 1
Sorted: 1 2 3 3 4 5 6 7 8 9

C Code for Insertion Sort
#include <stdio.h>

int main() {
    int unsorted[] = {9, 8, 7, 6, 5, 4, 3, 3, 2, 1};
    int size_unsorted = sizeof(unsorted) / sizeof(unsorted[0]);

    printf("\nUnsorted: ");
    for (int i = 0; i < size_unsorted; i++) { printf("%d ", unsorted[i]); }

    int current_element, temp;
    for (int i = 1; i < size_unsorted; i++) {
        current_element = unsorted[i];
        // swap the current element leftwards while its left neighbor is larger
        for (int j = i - 1; j >= 0 && unsorted[j] > current_element; j--) {
            temp = unsorted[j + 1]; unsorted[j + 1] = unsorted[j]; unsorted[j] = temp;
        }
    }

    printf("\nSorted: ");
    for (int i = 0; i < size_unsorted; i++) { printf("%d ", unsorted[i]); }

    return 0;
}
Output:
Unsorted: 9 8 7 6 5 4 3 3 2 1
Sorted: 1 2 3 3 4 5 6 7 8 9

Python Program for Insertion Sort
# unsorted list
unsorted = [9, 8, 7, 6, 5, 4, 3, 3, 2, 1]

# size of list
size_unsorted = len(unsorted)

# printing unsorted list
print("\nUnsorted: ", end="")
for i in range(size_unsorted):
    print(unsorted[i], end=" ")

for i in range(1, size_unsorted):
    current_element = unsorted[i]
    j = i - 1
    # swapping while the current element is lesser than its left neighbor
    while j >= 0 and unsorted[j] > current_element:
        unsorted[j + 1], unsorted[j] = unsorted[j], unsorted[j + 1]
        j -= 1

# printing sorted list
print("\nSorted: ", end="")
for i in range(size_unsorted):
    print(unsorted[i], end=" ")

Output:
Unsorted: 9 8 7 6 5 4 3 3 2 1
Sorted: 1 2 3 3 4 5 6 7 8 9

Properties of Insertion Sort
Here are important properties of insertion sort:
Online: Insertion sort can sort elements as it receives them. That is, if we have already sorted a list of elements and then append more elements to the list, we do not need to run the entire sorting procedure again; we only need to run the insert operation on the newly added elements (see the sketch after this list).
In-place: The space complexity of the insertion sort algorithm is constant; it does not require extra space and sorts the elements in place.
Stable: In insertion sort, we do not swap elements whose values are equal. For example, if two elements x and y are equal and x appears before y in the unsorted list, then x will still appear before y in the sorted list. This makes insertion sort stable.
Adaptive: A sorting algorithm is adaptive if it takes less time when the input elements, or a subset of them, are already sorted. As discussed above, the best running time of insertion sort is O(N) and the worst running time is O(N^2), so insertion sort is one of the adaptive sorting algorithms.
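To illustrate the online property mentioned above, here is a rough Python sketch (the helper name insert_last_element is hypothetical, not from the article) that appends new elements to an already sorted list and runs the insert operation only on those new elements:

def insert_last_element(A):
    """Bubble the last element leftwards into the sorted prefix."""
    i = len(A) - 1
    while i >= 1 and A[i] < A[i - 1]:
        A[i], A[i - 1] = A[i - 1], A[i]
        i -= 1

already_sorted = [1, 3, 5, 7, 9]  # previously sorted list
new_elements = [4, 2]             # elements that arrive later

# Only the newly appended elements need an insert pass; the sorted prefix is untouched.
for x in new_elements:
    already_sorted.append(x)
    insert_last_element(already_sorted)

print(already_sorted)  # [1, 2, 3, 4, 5, 7, 9]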
Complexity of Insertion Sort
Space Complexity
Insertion sort doesn’t require extra space to sort the elements, so its space complexity is constant, i.e., O(1).
Time Complexity
Since insertion sort processes the elements one at a time, it requires N-1 passes to sort N elements. A pass may make zero swaps if the elements are already in order, or one swap per element of the sorted sublist if the elements are arranged in descending order.
For pass 1, the minimum number of swaps required is zero and the maximum is 1.
For pass 2, the minimum number of swaps required is zero and the maximum is 2.
For pass N-1, the minimum number of swaps required is zero and the maximum is N-1.
The minimum number of swaps is zero, so the best-case time complexity is O(N), the cost of simply iterating through the passes.
The maximum total number of swaps is (1 + 2 + 3 + ... + (N-1)), i.e., N(N-1)/2, so the worst-case time complexity is O(N^2).
Here are the important time complexities of insertion sort:
Worst Case Complexity: O(n^2): The worst case occurs when the array is arranged in the opposite of the desired order, for example sorting into ascending order an array that is in descending order.
Best Case Complexity: O(n): The best case occurs when the array is already sorted. The outer loop runs about n times while the inner loop performs no swaps, so there are only about n comparisons and the complexity is linear.
Average Case Complexity: O(n^2): The average case occurs when the elements of the array are in jumbled order, neither fully ascending nor fully descending.
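To make the best-case/worst-case gap concrete, here is a small Python sketch (the counting instrumentation is an illustration, not part of the original article) that counts comparisons and swaps for an already sorted input versus a reverse-sorted input of the same size:

def insertion_sort_with_counts(A):
    """Insertion sort that also counts comparisons and swaps."""
    comparisons = swaps = 0
    for i in range(1, len(A)):
        j = i - 1
        while j >= 0:
            comparisons += 1
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                swaps += 1
                j -= 1
            else:
                break
    return comparisons, swaps

n = 10
print(insertion_sort_with_counts(list(range(n))))         # best case: (9, 0), roughly O(N)
print(insertion_sort_with_counts(list(range(n, 0, -1))))  # worst case: (45, 45), about N(N-1)/2, i.e., O(N^2)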
Summary:
Insertion sort is a comparison-based sorting algorithm.
It is a stable sorting technique, so it does not change the relative order of equal elements.
For every element, the insert operation is used to place it into the sorted sublist.
Insertion sort is an in-place sorting algorithm.
The worst and average time complexity of insertion sort is quadratic, i.e., O(N^2).
Insertion sort does not require any auxiliary space.
How To Improve WordPress Password Security With Bcrypt Algorithm
When we talk about password security, we often refer to the strength of your password and whether it can be easily guessed by hackers. However, one aspect of password security that few people talk about is how the password is stored in the database. In WordPress each password is usually salted and passed through MD5 hashing before it is stored in the database. It seems fine and secure until you find out that the MD5 algorithm is known to suffer from extensive vulnerabilities. According to CMU Software Engineering Institute, MD5 is essentially “cryptographically broken and unsuitable for further use.”
So what can you do to improve your WordPress password security? The answer is to use the bcrypt algorithm, particularly with the wp-password-bcrypt plugin.
bcrypt is based on the Blowfish cipher and is an adaptive function. This means that over time the iteration count can be increased to make it slower, so it remains resistant to brute-force search attacks even with increasing computation power.
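WordPress itself hashes passwords in PHP, but the adaptive cost idea is easy to see in any bcrypt implementation. Here is a minimal sketch using Python’s third-party bcrypt package (chosen purely for illustration; it is not part of WordPress or the plugin): each increment of the rounds cost factor roughly doubles the work per hash, which is what keeps brute-force attacks expensive as hardware improves.

import time
import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# Higher cost factors make each hash (and therefore each brute-force guess) slower.
for rounds in (10, 12, 14):
    start = time.perf_counter()
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=rounds))
    elapsed = time.perf_counter() - start
    print(f"rounds={rounds}: {elapsed:.3f}s, hash prefix {hashed[:29].decode()}")

# Verification reads the cost and salt embedded in the stored hash.
print(bcrypt.checkpw(password, hashed))  # True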
Luckily, even if you are not technically competent, you can easily upgrade your WordPress system to replace MD5 hashing with the bcrypt algorithm.
1. Download the wp-password-bcrypt plugin as a zip file.
2. Extract the zip file and open the extracted folder. All you need is the “wp-password-bcrypt.php” file.
3. With your FTP program (or cPanel) connect to your WordPress server and create a “mu-plugins” folder under the “wp-content” folder. This is also known as the “Must Use Plugins” folder, and all plugins placed in this folder are automatically activated. If the “mu-plugins” folder already exists, ignore this step.
4. Upload the “wp-password-bcrypt.php” file to this “mu-plugins” folder, and you are done.
What the “wp-password-bcrypt” plugin does is re-hash the password using bcrypt and store it in the database whenever a user logs in to the system. There is no configuration required, and everything simply works in the background. Do also note that if your site has a lot of inactive users who have not logged in for a long time, their passwords will still be using the MD5 hash.
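The plugin is written in PHP, but the general “verify, then transparently re-hash on login” pattern it follows can be sketched in Python (the user store and function names below are hypothetical, for illustration only; WordPress’s real MD5-based scheme is salted and more involved):

import hashlib
import bcrypt  # pip install bcrypt

# Hypothetical user store: {username: stored_password_hash}
users = {"alice": hashlib.md5(b"hunter2").hexdigest()}  # legacy MD5 hash

def check_and_upgrade(username, submitted_password):
    """Verify a login and transparently upgrade a legacy MD5 hash to bcrypt."""
    stored = users[username]
    if stored.startswith("$2"):  # already a bcrypt hash
        return bcrypt.checkpw(submitted_password.encode(), stored.encode())
    # Legacy MD5 hash (unsalted here for brevity)
    if hashlib.md5(submitted_password.encode()).hexdigest() == stored:
        users[username] = bcrypt.hashpw(submitted_password.encode(), bcrypt.gensalt()).decode()
        return True
    return False

print(check_and_upgrade("alice", "hunter2"))  # True, and the stored hash is now bcrypt
print(users["alice"].startswith("$2"))        # True
print(check_and_upgrade("alice", "wrong"))    # False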
Lastly, to uninstall the plugin, all you have to do is delete it from the “mu-plugins” folder. There are no negative consequences, and everything will continue to work as usual.
Conclusion
It is of little use for users to do everything they can to protect themselves if the system itself is insecure. By switching to the bcrypt algorithm, you can quickly and easily improve your WordPress password security and keep user accounts from being easily cracked (assuming your users choose strong passwords as well).
Image credit: Linux password file
Damien
Damien Oh has been writing tech articles since 2007 and has over 10 years of experience in the tech industry. He is proficient in Windows, Linux, Mac, Android, and iOS, and has worked as a part-time WordPress developer. He is currently the owner and Editor-in-Chief of Make Tech Easier.
Update the detailed information about Recovering From A Google Core Algorithm Update With Lily Ray on the Eastwest.edu.vn website. We hope the article's content will meet your needs, and we will regularly update the information to provide you with the fastest and most accurate information. Have a great day!