Meet The Armored Soldiers And Nuclear Artillery Of A Future War That Never Was
The first half of the 20th century saw war unlike any that had transpired before. Elements were the same: people still fought over ideas and land, and it was still infantry on foot and civilians that did most of the dying. But the weapons! Fantastical, horrific weapons, like the machine guns that turned trench warfare from protracted stalemate to meat grinder, and fighters and bombers that burned through the skies. Or the armored tanks, which lumbered into history on the Western Front and then defined history from 1939 to 1945, changing centuries of prior thinking on how best to seize victory. From the vantage point of the middle of the 20th century, the coming decades of war seemed almost certain to be a new bloody spectacle, powered by technological marvels.
And then there is the question of tactical nuclear weapons.
M-388 Davy Crockett nuclear weapon
The Davy Crockett was the smallest nuclear weapon ever fielded by the United States.
“The only thing [Rigg] got wrong in a tactical concept sense is that we would use nukes in a tactical way,” said Brett Friedman. Friedman is a veteran who now works as a civilian wargame analyst at the Marine Corps Warfighting Lab. His new book “On Tactics” examines the continuity in how battles have been fought across centuries. (And, in the interest of full disclosure, Friedman and the author co-founded Grand Blog Tarkin together.) “A lot of people in the 1950s thought that we would use nukes the way we use artillery.”
At the same time that Rigg argued that the infantryman still had a role on the battlefield, he had to contend with the growing nuclear arsenals in the world. The only nuclear weapons ever used in war were bombs, designed to blow up in a big way and annihilate cities in a single blast. In the late 1940s and throughout the 1950s and early 1960s, American military planners explored much smaller nuclear weapons, like the jeep-mounted Davy Crockett, to see if a sudden burst of radioactive death and fallout was the specific punch they needed to win a seemingly inevitable land war in Europe. No country has used a nuclear weapon in battle since August 1945, but for decades it appeared a near-certainty that it would happen again, and with some regularity.
To get around these new and deadly radioactive hotspots, Rigg looks to a plethora of aerial vehicles, everything from nuclear-powered airborne aircraft carriers to light flying platforms transporting 15 soldiers and a driver at a time. Swarms of these platforms would move across a target country in a sort of “aerial blitzkrieg,” seizing airfields and taking over or destroying missile silos, and then retreating when enemy reinforcements arrived. While the Army is still working on hover cars (like it has been since the 1950s), there was another machine that could do the job, and had, by the time Rigg put together “Futurarmy”: helicopters.
“Marines started experimenting with helicopters right after World War II, and executed the first vertical envelopment in history in Korea on 20 September 1951,” said Friedman, referring to the use of helicopters to move Marines around the enemy in Operation Summit. Helicopter maneuvers in Korea may have been on Rigg’s mind as he wrote “Futurarmy,” but it’s Vietnam that showed the true utility of moving troops through the sky and then putting them on the ground.
“In Vietnam, both the Army and the Marine Corps used vertical envelopment as their tactic of choice,” said Friedman. “It was easier to get around Vietnam in a helicopter than in a convoy of trucks, or an armored column, or even on foot. [Rigg] was right that we started to use technology to assist the mobility of our forces, and the Army especially ran with that in Vietnam.”
Part of Rigg’s vision for the soldier of the future was fighting alongside a whole combined-arms invading force, with infantry backed by supporting fire ranging from nearby rocket launchers to long-range ballistic missiles. Flying overhead would be the Centaur, a sort of “aerial artillery”: a fighter plane focused entirely on blowing up targets on the ground, at the behest of the infantry.
Marine Corps helicopter in the Korean War
Taken on September 20, 1951, this photograph shows Marines taking Hill 812, believed to be the first use of helicopters for a “vertical envelopment.”
“He got the idea of close air support exactly right,” said Friedman. “Once the technology got to where troops on the ground could talk to the pilot in the aircraft via radio, that’s when you saw the coordination between the two, leading to his vision of aircraft supporting ground troops. He got the technology a little weird with his Centaur, but the modern Centaur is the A-10 Warthog. It is flying artillery.”
Rigg even imagined something like the drones of the future battlefields, writing about mechanical sentries recording troop movements and transmitting them by radio, or small flying bird-like robot scouts that can tell soldiers where enemies are nearby. Yet there is much Rigg gets wrong, from emergency rations condensed into flakes the size of a fingernail, to a general trend that equipment would be light enough that soldiers could travel into battle unburdened with heavy packs. And it all takes place in the context of massive superpower wars, with tactical nuclear weapons blasting around, a development fortunately not yet seen.
So what are we to make of this strange vision of the wars of tomorrow? As much as we are to make of modern insurgent groups boasting about reinventing century-old weapons: ideas about war are products of the technologies and tools of their time, but the general thrust of history is still people, with weapons they can carry, trying forever to attack enemies at their weakest.
Still, given that war both always changes and never changes, Friedman offered his own guess at the new, strange sights we’ll see on the battlefields of the coming decades.
“My prediction for the battlefield is that we will see coders on the battlefield. In the sense of people who can write software on the fly in the battlefield. Commanders are going to be commanding as many machines as they are humans, and they’ll want those machines to be responsive to the mission, to their intent, to what they want to do on the battlefield. There’s not going to be time for a software update to enable a UAV or your ground robot to do a certain thing or go a certain route. Commander’s going to want that on the spot, that’s going to lead to probably men and women in uniform that know how to code, know how to get into the code and manipulate the software, on the battlefield itself, in order to meet that commander’s intent.”
According To Genomic Pioneers, The Future Of Genetics Is The Future Of Computing
The journal Nature today released a massive retrospective on the tenth anniversary of the Human Genome Project (officially celebrated June 26 of this year), which included two important pieces from genomics pioneers J. Craig Venter and Francis Collins. While retrospectives generally look backward, Venter and Collins are already looking to the next decade, one filled with free-flowing information, reams of phenotype data and multiple genomes per person. But the biggest development in genomics won’t be a genomics development at all; it will come from the biggest, baddest computing systems the world has ever seen.
Genome Data will be Free
The race to sequence the first human genome produced some fantastic things, not least of which was the first human genome. Ongoing prizes like the Archon X-Prize continue to offer research groups and academics the incentives to push the technological and scientific envelopes toward greater innovation. But cooperation, not competition, will get genomics to where it needs to be.
Collins notes that while legally binding policies must be in place to ensure individual privacy, genome data must be made available to all. Genomics is simply too big to end up like Big Pharma, with each individual entity clinging tenaciously to its proprietary data sets like commodities. The original Human Genome Project established an ethic of immediate data deposit that allowed others access to its data. That kind of openness and inclination toward collaboration will characterize the future of genome research.
Phenotype is the New Genotype
We’ve figured out how to sequence the genome, but that was only the beginning. Now we’ve got to figure out what it all means, and that means phenotype — behaviors, environmental factors, physical characteristics, etc. — will become just as important as genotype in determining what the genome really means. And while phenotype may seem easier to characterize than genotype, the task is actually far larger.
The vast complexity of human clinical and biological data is not easily digitized. As Venter notes, a query like “are you diabetic?” is simple enough to answer with a yes or no, but that one query raises many more: age, diet, medication, family history, vascular health, environment, etc. Only by pulling all that data into one place can we really begin to use the genome to revolutionize medicine. Which means . . .
The Next Big Genomics Breakthrough is Actually a Computing Breakthrough
Say we had all the genotype data and phenotype data we ever wanted. Without a means to process, analyze and cross-reference all that information, we would simply be floating on a sea of base pairs and phenotype data with no practical means of navigation. “The need for such an analysis could be the best justification for building a proposed ‘exascale’ supercomputer, which would run 1,000 times faster than today’s fastest computers,” Venter writes. Such machines could unlock a future in which each person has access not just to his or her own genome, but to several genomes taken from various cell types within the body.
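To make the cross-referencing problem concrete, here is a minimal sketch in Python of the kind of join being described: genotype records matched against phenotype records, counting how often a given variant shows up among people who report a given condition. Every identifier and data point below is a hypothetical illustration rather than part of any real pipeline; an actual analysis would run the same join across millions of people and billions of variants, which is where the exascale argument comes from.

# Toy sketch: cross-referencing genotype and phenotype records.
# All names and data are hypothetical illustrations, not a real pipeline.
from collections import Counter

# Hypothetical genotype records: person ID -> variant IDs observed.
genotypes = {
    "p1": {"rs100", "rs200"},
    "p2": {"rs100"},
    "p3": {"rs300"},
}

# Hypothetical phenotype records: person ID -> answers to clinical queries.
phenotypes = {
    "p1": {"diabetic": True, "age": 61},
    "p2": {"diabetic": False, "age": 45},
    "p3": {"diabetic": True, "age": 52},
}

def variant_condition_counts(variant, condition):
    """Count carriers and non-carriers of a variant among people reporting a condition."""
    counts = Counter()
    for person, traits in phenotypes.items():
        if not traits.get(condition):
            continue  # only consider people who report the condition
        has_variant = variant in genotypes.get(person, set())
        counts["carrier" if has_variant else "non_carrier"] += 1
    return counts

print(variant_condition_counts("rs100", "diabetic"))
# Counter({'carrier': 1, 'non_carrier': 1}) for the toy data above

Even this trivial example hints at the scaling problem: the interesting questions involve every variant against every recorded trait, across entire populations, and that combinatorial blow-up is what pushes the analysis toward supercomputing.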
Collins agrees, emphasizing that there’s no substitute for good old fashioned elbow grease; large-scale research projects tediously logging reams of data and technological breakthroughs that allow us to make use of all that information will be the driving forces behind the next great strides in genetic research.
Graduate research assistants and computer scientists, sharpen your pencils. The future of genetics, it turns out, is in your hands.
The Airport Of The Future Isn’t An Airport: It’s A City
Customer service complaints, wasted hours, and extremely uncomfortable seats: Frequent and occasional flyers alike have a lot of problems with the status quo of air travel.
One of the worst examples of a frustratingly behind-the-times industry is New York’s La Guardia airport, a fixture of U.S. air travel since its official opening in 1939. La Guardia’s recently announced redesign may make accessibility easier and security faster, but widening existing bottlenecks in an inferior process may not be enough to keep up with the current progress in airport design. So what does the ideal airport of the future look like? It’s more complicated than a rudimentary set of improvements. Airports of the future may see changes not just in security procedures and technology, but in purpose within the communities they serve.
Let’s be clear: the United States has some awful airports, and La Guardia is consistently complained about. Lionel Ohayon, CEO and founder of the design group ICrave, says that America has considerably worse airports because we’ve had airports longer than many other countries, and the age is showing in our designs. “We kind of have the worst airports in the world because we went through this 85 years ago when everyone was building airports in America. And now you have places where all this new stuff is happening.”
With that in mind, we asked experts in airport design and infrastructure where the future of air travel lies, and they saw an entirely redesigned user experience, and pointed to several key factors in making airports financially independent, user friendly, and reliably efficient.
“When we look to the passenger airport of the future, you kind of have to look at the passenger and the technology that will be made available,” says TJ Schulz, president of the Airport Consultants Council, a consulting firm for private businesses that have a stake in airport development. “I think what we’re going to see are attempts to really facilitate and automate the passenger processing experience.” He thinks getting through the checkpoint and check-in desks could be a thing of the past as self-service kiosks can easily take their place.
Ohayon has focused particularly on one frustrating routine: bag checking. The process of carrying, dropping-off, checking, and picking up a bag could at least partially be done by the passengers themselves, which would streamline the experience, he says.
Schulz believes we’re already seeing some automation of bag check procedures around the country, and airlines are going to keep seeking to cut out extra steps, like the agent. ”There will be opportunities for passengers to check their own bags rather than have to go through an agent at a desk,” he says.
None of this is confirmed or denied for La Guardia, however, as all of these efficiency increases are still essentially pipe dreams for American airports. La Guardia, unlike abstract conceptual airports of the future, has to be functional under the rules of current security procedures.
But the ability to be flexible is important. Both Schulz and Ohayon point to a rearrangement of the process, where security happens at the gate, not the entrance to the terminal. Schulz speculates that eventually passengers will even be self boarding, a process that is already in use at more than 30 airports across Europe. For the moment, though, many of them, including Air France and other airlines at Paris Charles de Gaulle, France’s largest international airport, still have attendants that stand by to help passengers through the various menus.
For self-boarding to occur, that requires more identification ahead of time for most passengers. “I think the biggest constraint to the desire for more automation is the security checkpoint,” says Schulz. “I think the TSA has a vision of having a sizable segment of the traveling population already vetted, so that they’re known entities, and offer, for the most part, no threats to the system.”
Already, LaGuardia’s sister airport, JFK, has made some of these improvements. Its new Delta terminal houses several iPad-based restaurant sites that allow you to sit and order food while checking flight times and surfing the web. Many airports around the world now allow you to check in and print boarding passes without meeting an agent face to face, unless you’re checking a bag. European airports are going so far as to add bag tag printers to the self check-in kiosks, which means that by the time you hand your bag off to an agent, all that’s left to do is check the weight and your ID. On a more creative note, Tokyo’s Narita airport’s new terminal is designed like an indoor track and is color-coded according to passengers’ desired direction: blue pathways for departures and red pathways for arrivals.
But beyond the security screening procedures and infrastructure that will create better access to the airport itself, the most important piece of the puzzle is commerce.
Those in the private sector of airport development agree that a better user experience means more comfort, which will include making both shopping and entertainment more available. That means the airport has to be designed less like a cattle processing line, and more like, well, its own city.
“I think city planning is a good place to start,” says Ohayon. “The airport as a city is something you’re going to see more and more thinking around.” He says that if you think about how much time people spend in airports and how much travel has become a part of our modern daily lives, the idea and purpose of what an airport is, “becomes redefined.”
Travelers frequently spend extra hours in airports, sometimes even overnight, when their flights are delayed or cancelled. So the potential for an in-airport hotel at La Guardia is promising, but the commerce and retail spaces that the new La Guardia airport would house have a lot to live up to if they want to be future-proof, and be able to support the airport itself.
“Airports are trying to do more to build a revenue base that’s not reliant on the federal government,” says Schulz, “and also not be so reliant on airlines that can be at your terminal one day and gone the next.” The way you will see it, he says, is through the whole airport experience. Much of that comes in the form of retail therapy for all flyers.
“Without fail you’re seeing much nicer, well portioned shopping opportunities, [and] much better quality restaurants that really speak to the local flavor of the area,” says Schulz. “They’re looking for opportunities for people waiting at gates to hop on an iPad and be able to order food from a restaurant down the terminal that’s going to be delivered to them.”
Ohayon says we’ve seen inklings of this model in other brick and mortar user experiences already. “If you think of a movie theatre…when I was a kid before you got into the theater there was a ticket ripped at the front door and if you wanted popcorn you had to be actually going to a movie. Now movie theaters are moving the ticket rip all the way back to the door of the theater itself,” he says. This allows them to open up their space to the public, moviegoers and non-moviegoers alike, and capitalize on that real estate. “They’re making a social kind of connection to all that space that they have,” he says. “You’ll be seeing the same things happen at airports.” The airport security will happen at the gate and all of those amenities will exist in a public domain.
“Airports want to become a destination,” says Schulz. “They want to see people travel out to the airports to go shopping, and we’re seeing that internationally. The Middle East, Dubai, and others, they just have wonderful shopping.”
It’s a big leap in a city like New York to expect people to commute to the airport for retail, and likely not a realistic concern for La Guardia, which is a significant distance from the city center. “Many of us just want to get to where we want to go,” says Schulz. However, airports’ efforts to become more self-reliant could well prove beneficial. That means that even if you’re not using an airport like a mall, you’ll still see improvements to your own flying experience in the coming years. A more self-reliant airport can drop prices for airlines and keep more flights coming in and out.
But the prospect that the thousands of passengers who pass through large airports every hour will find the experience a little more comfortable is a welcome improvement in itself.
Was The Patriot Front March In Boston A Sign Of The New KKK?
The white nationalist hate group Patriot Front marching through parts of Boston on July 2, 2022. CAS historian Katie Lennard says the surprise act was meant to be a “visual threat.” Photo by Stuart Cahill/MediaNews Group/Boston Herald via Getty Images
CAS historian on how the hate group’s July 2 pop-up protest does—and does not—resemble the Klan of the past

Nobody saw it coming. On Saturday, July 2, around 100 members of a white nationalist hate group called the Patriot Front, with chapters in some 40 states, including Massachusetts, showed up unannounced in downtown Boston and marched along parts of the Freedom Trail. They wore white masks, navy blue shirts, and khaki pants. Some of them carried flags, some shields. And before law enforcement could react, the surprise demonstration was over and they were gone.
It didn’t take long for references to the Ku Klux Klan to start popping up on social media. But is that a fair comparison—are groups like the Patriot Front the new KKK, or something unique to this generation?
BU Today spoke with Katie Lennard, the College of Arts & Sciences American & New England Studies Program’s inaugural Abbott Lowell Cummings Postdoctoral Fellow in American Material Culture. Lennard, whose PhD dissertation is titled Manufacturing the Ku Klux Klan’s Visible Empire, 1866-1931, has studied the history of the Klan. She is working on a book titled Manufacturing the Ku Klux Klan: Robes, Race, and the Birth of an Icon.
This interview has been edited and condensed for clarity.
Q&A with Katie Lennard
BU Today:
This was basically a pop-up protest, with Patriot Front members showing up unannounced, catching everyone by surprise, then disappearing before anyone could react. Is that how the KKK operated? Did people know where they would march and when?
BU Today:
So do you see this Patriot Front group as more similar to the old Klan or a newer version?
BU Today:
Isn’t it true that the old Klan was largely focused on attacking Black Americans? How is that different from what we are seeing today?
Katie Lennard: So, the 19th-century Klan was very white versus Black, and also anti-Republican, challenging Black citizenship and the increased federal power in the South after the Civil War. The second Klan, in the 1920s, that’s when they got into anti-Catholicism, anti-Semitism, anti-immigration politics. They claimed to be supportive of Prohibition even though many Klan leaders were also drinking a lot. It was a deeply hypocritical organization, that goes without saying. The way the Patriot Front describes themselves is that they are protecting and upholding the American republic from all of these incursions. This language is really reminiscent of the second Klan.
BU Today:
So when people refer to what we are seeing as “the new Klan,” would you say that’s an accurate depiction of this movement?
Katie Lennard: Kathleen Belew has written the best book on these white nationalist groups, called Bring the War Home [Harvard University Press, 2018]. We go from a Klan in the 1920s that has members in every state in the country to the 1960s, when it’s a much more underground movement, and you start to have all these splintered groups taking up the name.
General view of the Ku Klux Klan on parade along Pennsylvania Avenue in Washington, D.C., on August 8, 1925. It was estimated that nearly 60,000 klansmen marched in the parade while nearly a million persons viewed the demonstration. Photo by Bettmann via Getty
BU Today:
Can we talk about that? Why do they wear masks? We know it’s not because of the pandemic. Is it just about hiding their identity or is it deeper than that?
BU Today:
Why do they want to evade identification?
Katie Lennard: We saw in Charlottesville some were publicly delighted to get press. But we also saw real consequences for some of the marchers who were identified. I think remaining masked provides a real feeling of power. Of representing this organization. They are trying to give themselves authenticity, collective power. Not being identified is one of the many vectors and a lot of it is about performance of the identity they are trying to display.
BU Today:
Public officials have expressed real concern about what happened in Boston, with Mayor Michelle Wu saying the city needs to respond to “this growing rise and trend in white supremacy and hate.” Is there anything officials can do when faced with these pop-up demonstrations?
BU Today:
In the wake of the July 2 Boston march, what worries you the most?
Katie Lennard: What feels really alarming to me about this particular group is that they are becoming emboldened by broader cultural currents. When you have them gathering in Idaho with weapons to go to a Pride parade, that feels like an escalation of planned terrorism.
9 Things That Are Never Admitted About Open Source
You might think that a group of intelligent people like the members of the free and open source software (FOSS) community would be free of hidden taboos. You might expect that such a group of intellectuals would find no thought forbidden or uncomfortable—but if you did, you would be wrong.
Like any sub-culture, FOSS is held together by shared beliefs. Such beliefs help to create a shared identity, which means that questioning them also means questioning that identity.
Some of these taboo subjects might undermine truisms held for twenty years or more. Others are new and challenge accepted truths. If examined, any of them can be as threatening as a declaration of shared values can be reassuring.
Yet while examining taboos can be uncomfortable, doing so can often be necessary. Beliefs can linger long after they no longer apply or have degenerated into half-truths. Every now and then, it is useful to think the unthinkable, if only so beliefs can be re-synced with reality.
With this rationale, here are nine of my observations about open source today that are overdue for examination.
When Ubuntu first emerged nine years ago, many regarded it as the distribution that would lead the community to world domination. Coming out of nowhere, it immediately began focusing on the desktop in a way that no other distribution ever had. Tools and utilities were added. Many Debian developers found jobs at Canonical, Ubuntu’s commercial arm. Developers had their expenses paid to conferences that they couldn’t have attended otherwise.
Over the years, though, much of this initial excitement has eroded. Nobody seemed to mind Ubuntu’s founder Mark Shuttleworth calling for major projects to coordinate their release cycles; they simply ignored it. But eyebrows began to rise when Ubuntu started developing its own interface instead of contributing to GNOME. Canonical started vetoing what was happening in Ubuntu, apparently not for the common good but mainly in the search for profit. Many, too, disliked Ubuntu’s Unity interface when it was released.
But listen to Canonical employees or Ubuntu volunteers talk, and you could almost imagine that the last nine years had never happened. In particular, read Shuttleworth’s blog or public statements, in which he assumes that he remains a community leader and that “the big mouths of ideologues” will eventually be silenced by his success.
Seven years ago, Tim O’Reilly stated that open source licenses were obsolete. That was his dramatic way of warning that online services undermined the intent of FOSS. Like FOSS, cloud computing offered users the free use of applications and storage, but without any controls or guarantee of privacy.
The Free Software Foundation responded to the growing popularity of cloud computing by dusting off the GNU Affero General Public License, which extends FOSS ideals to cloud computing.
The founder of the Free Software Foundation and the driving force behind the GNU General Public Licenses, Richard M. Stallman is one of the legendary figures in free and open source software. For years, he has been the most vocal defender of software freedom, and the community probably wouldn’t exist without him.
What his supporters are reluctant to admit is that Stallman’s tactics are limited. Many say he is not comfortable with people, and his arguments center on semantics—on the words chosen, and how they bias an argument.
This approach can be insightful. For example, when Stallman asks why file-sharing is equated with pirates pillaging and looting, he reveals the bias that the music and movie industry tries to impose on the issue.
But, unfortunately, this is almost Stallman’s sole tactic. He rarely moves beyond using it to castigate people, and he repeats himself even more than most people who spend their time making speeches. Increasingly, he is seen in many parts of the community as both irrelevant and embarrassing—as someone who has outlived his effectiveness.
People seem to find it hard to live with the idea that Stallman could both have a history of accomplishment and be less effective than he once was. Either they defend him fiercely because of his history, or they attack him as a wannabe who never was. I believe both his accomplishments and his current lack of effectiveness are true at the same time.
One of the main stories that FOSS developers like to tell themselves is that the community is a meritocracy. Status in the community is supposed to be based on what you have recently contributed, either in terms of code or time.
As a motivation and a source of group identity, the idea of meritocracy has powerful appeal. It encourages people to work long hours and gives community members a sense of identification and superiority.
In its purest form—say within a small project whose contributors have been working together for several years—meritocracy sometimes exists.
More often, though, it is heavily qualified. In many projects, documentation writers or artists are less influential than programmers. Often, who you know can influence whether your contributions are accepted as much as the actual quality of your work.
Similarly, the famous are more likely to influence decision-making than the rank and file, regardless of what they have done recently. People like Mark Shuttleworth or corporations like Google can buy their way to influence. Community projects can find their governing bodies dominated by their corporate sponsors, as has usually been the case with Fedora. Although meritocracy is the ideal, it is almost never the sole practice.
Another trend that undermines meritocratic ideals is the sexism—and, sometimes, outright misogyny—found in some corners of the community. In the last few years, FOSS leaders have denounced this sexism and adopted official policies to discourage some of its worst aspects, such as harassment at conferences. But the problem appears firmly embedded at other levels.
The number of women varies between projects, but 15-20 percent would be considered a relatively high number of women involved in an open source project. In many projects, the number is below 5 percent, even when non-programmers are counted.
Even compared to these low numbers, women are under-represented at conferences, except in those cases where women are actively encouraged to submit proposals—efforts that are inevitably met with accusations of special treatment and quotas, even when no evidence of such things exists.
Similar reactions, many of them far worse, can be found on many FOSS sites or IRC channels whenever a woman appears, especially a stranger. They give the lie to the claims that the community is only interested in contributions, or that the under-participation of women is simply a matter of individual choices.
Just over a decade ago, you could count on Microsoft to call FOSS communistic or un-American, or on leaked documents to reveal its plans to destroy the community.
Much of the community still clings to the memories of those days—after all, nothing brings people together like a powerful and relentless enemy.
But what people fail to appreciate is that Microsoft’s response has become more nuanced, and it varies between corporate departments.
No doubt Microsoft’s top executives still see FOSS as competition, although the colorful denunciations have ceased.
However, Microsoft has realized that, given the popularity of open source, the company’s short-term interests are best served by ensuring that FOSS—especially popular programming languages—works well with its products. That is the basic mission of Microsoft Open Technologies. Recently, Microsoft even released a quote praising the latest release of Samba, which allows management of Microsoft servers from Linux and other Unix-based operating systems.
Microsoft is not about to become an open source company any time soon or to make a disinterested donation of cash or code to the community. Still, if you ignore old antagonisms, these days Microsoft’s self-centered approach to FOSS is not greatly different from that taken by Google, HP, or any other corporation.
2012 saw a retreat from GNOME 3 and Unity, the latest major graphical interfaces. The retreat was largely a response to the perception that GNOME and Ubuntu were ignoring users’ concerns and imposing their own visions of the desktop without consultation.
The short-term effect of this retreat was the reinvention of GNOME 2 in various forms.
As the predecessor of both GNOME 3 and Unity, GNOME 2 was an obvious choice. It is a popular desktop and places few restrictions on users.
All the same, its long-term effect threatens to be a stifling of innovation. Not only is time spent resurrecting GNOME 2 time taken away from exploring new possibilities, but the effort seems a reaction against the whole idea of innovation.
Few, for instance, are willing to admit that GNOME 3 or Unity have any useful features. Instead, both are condemned as wholes. Nor have future developments, such as GNOME’s intention to make security and privacy easier, received the attention they deserve.
The result may be that, for the next few years, innovation is likely to be seen as a series of incremental changes, with few efforts to enhance general design. Developers, too, may be hesitant to try anything too different in order to avoid rejection of their designs.
I have to applaud the fact that the demands of users have triumphed in the various resurrections of GNOME 2. But the conservatism that seems to accompany it makes me worry that the victory comes at the cost of equally important concerns.
FOSS likes to present itself as a world of abundant choice, with several alternatives in every software category. The reality is somewhat different. Examine a user poll, and you find a consistent pattern in which one application or technology has 50-65 percent of the votes, and the next one, 15-30 percent.
For example, among distributions, Debian, Linux Mint, and Ubuntu, all of which use the .DEB package format, won 58 percent of the votes in Linux Journal’s 2012 Readers’ Choice Awards, compared to 16 percent for Fedora, openSUSE, and CentOS, which use the .RPM format.
Similarly, VirtualBox scored 56 percent under Best Virtualization Solution, and VMware 18 percent. Under Best Revision Control, Git received 56 percent and Subversion 18 percent. The most lopsided category was Best Office Suite, in which LibreOffice received 73 percent and Google Docs 12 percent.
There were only two exceptions to this general pattern. The first was the Best Desktop Environment category, where the diversification of the last year was reflected in KDE receiving 26 percent, GNOME 3 22 percent, GNOME 2 15 percent, and Xfce 12 percent. The second was Best Web Browser, in which Mozilla Firefox received 50 percent and Chromium 40 percent.
Overall, the numbers fall short of a monopoly, but in most categories, the tendency is there. The best that can be said is that, without the profit motive, being less popular does not mean that an app will disappear. But if competition is healthy, as everyone likes to say, there is some cause for concern. When you look closely, FOSS is not nearly as diverse as it is assumed to be.
By 2004, FOSS had reached the point where people could do all of their consumer tasks, such as email and web browsing, and most of their productivity computing using FOSS. If you ignore the hopes for a free BIOS, only wireless and 3-D drivers were needed to realize the dream of a completely free and open source computer system.
Nine years later, many of the free wireless drivers and some of the free graphic drivers are available—but far from all. Yet the Free Software Foundation only periodically mentions what needs to be done, and the Linux Foundation almost never does, even though it sponsors the OpenPrinting database, which lists which printers have Linux drivers. Given the combined resources of Linux’s corporate users, the final steps could probably be taken in a matter of months, yet no one makes this a priority.
Granted, some companies may be concerned about so-called intellectual property in the hardware they manufacture. Perhaps, too, no one wants to reverse engineer for fear of upsetting their business partners. Yet the impression remains that the current state of affairs exists because it is good enough, and too few care to reach the goals that thousands have made their lives’ work.
A few people might be aware of some of these taboo subjects already. Probably, however, there is something in this list to peeve everyone.
However, my intent is not to start nine separate flame wars. I’d have no time for them even if I wanted them.
Instead, these represent my best effort to identify the places where what is widely believed in the community needs to be questioned. I could be wrong—after all, I am discussing what I have grown used to thinking, too—but at worst, the list is a start.
Future Of Fashion Industry And Digital Transformation
How can we say that the fashion industry has a bright future with the growth of internet and digital transformation?
We can say that the fashion industry has a bright future with the growth of the internet and digital transformation due to the following reasons −
Increased Accessibility − The internet has made fashion more accessible to consumers all around the world. E-commerce platforms and online fashion retailers have made it easier for consumers to purchase fashion products from anywhere and at any time, thereby expanding the industry’s reach.
Expansion of social media − Social media platforms such as Instagram, Facebook, and TikTok have become powerful tools for fashion brands to showcase their products and engage with their target audience. Influencer marketing has also become an effective way for fashion brands to promote their products and reach new customers.
Sustainability focus − The fashion industry is increasingly focusing on sustainability, and the internet is helping to promote and encourage this trend. Online platforms are enabling consumers to learn more about sustainable fashion and purchase sustainable products.
Changing consumer preferences − Consumers are becoming more conscious about their fashion choices and are looking for brands that align with their values. The internet has made it easier for consumers to research and learn about the brands they buy from, which has led to increased demand for ethical and sustainable fashion.
The Future of Fashion Industry with the Growth of the Internet and New Technologies

The future of the fashion industry looks bright with the growth of the internet and new technologies. Here are some potential developments and trends that could shape the industry in the coming years −
Increased personalization − As technology continues to evolve, fashion brands are finding new ways to offer personalized and customized shopping experiences. Advances in artificial intelligence and machine learning could help fashion brands to better understand their customers’ preferences and offer tailored recommendations.
Virtual and augmented reality − Virtual and augmented reality technologies are being increasingly used in the fashion industry to enhance the shopping experience. These technologies could enable customers to try on clothes virtually, see how different outfits will look on them, and even experiment with new styles.
Sustainability focus − The fashion industry is facing increasing pressure to become more sustainable, and the growth of the internet could help to facilitate this trend. Online platforms could help to promote and encourage sustainable fashion, making it easier for consumers to find and purchase eco-friendly products.
Emerging technologies − New technologies such as blockchain and 3D printing could also have a significant impact on the fashion industry. Blockchain technology could be used to improve transparency and traceability in the supply chain, while 3D printing could revolutionize the way clothing is produced and manufactured.
Increasing use of data − Data analytics and machine learning are already being used to understand consumer behavior and preferences, but their use could become even more prevalent in the future. Fashion brands could use data to identify emerging trends and develop new products that meet consumer demands.
Overall, the future of the fashion industry looks exciting and promising with the growth of the internet and new technologies. These developments could help to drive innovation, enhance the shopping experience, and make the industry more sustainable and socially responsible.
The Rise of Ethical and Sustainable Fashion and How the Internet Is Facilitating This Trend

The rise of ethical and sustainable fashion is a growing trend in the fashion industry, and the internet is playing an important role in facilitating this movement. Here are some ways in which the internet is helping to promote ethical and sustainable fashion −
Increased awareness − The internet has made it easier for consumers to learn about the social and environmental impact of the fashion industry. This has led to increased awareness and concern about issues such as labor rights, animal welfare, and pollution. As a result, consumers are becoming more conscious about their fashion choices and are looking for brands that align with their values.
Access to information − The internet has also made it easier for consumers to access information about fashion brands and their practices. This includes information about the materials used in clothing, the working conditions of factory workers, and the steps brands are taking to reduce their environmental footprint. This transparency is helping consumers to make more informed decisions and hold brands accountable for their actions.
Online platforms − Online platforms such as social media, e-commerce sites, and fashion blogs are providing a platform for sustainable and ethical fashion brands to showcase their products and reach a wider audience. This is helping to promote and encourage sustainable fashion and make it more accessible to consumers.
Collaborations and partnerships − The internet is also enabling collaborations and partnerships between sustainable fashion brands and other organizations. For example, sustainable fashion brands are teaming up with environmental organizations and charities to raise awareness about sustainable fashion and promote eco-friendly initiatives.
Innovation and technology − The internet is facilitating innovation and technology that can help to make the fashion industry more sustainable. This includes new materials such as recycled polyester and eco-friendly dyes, as well as new technologies like 3D printing and blockchain that can improve supply chain transparency and traceability.
Overall, the internet is playing an important role in promoting ethical and sustainable fashion. As consumers become more aware of the social and environmental impact of the fashion industry, the demand for sustainable fashion is likely to continue to grow. The internet can help to connect consumers with sustainable fashion brands and provide the information they need to make informed choices.
The Role of Social Media in Promoting Fashion Brands and Products

Social media has become a powerful tool for promoting fashion brands and products. Here are some ways in which social media is playing a role in promoting fashion −
Influencer marketing − Social media platforms such as Instagram and TikTok have given rise to a new type of marketing called influencer marketing. Fashion brands are teaming up with influencers to promote their products to their followers, who are often a highly engaged and targeted audience.
Brand awareness − Social media is also helping to increase brand awareness for fashion brands. By posting engaging and visually appealing content, fashion brands can build a strong social media presence and reach a wider audience.
Trend identification − Social media platforms can also be used to identify emerging fashion trends. Brands can analyze social media data to see what types of clothing or styles are gaining popularity, which can help them to create new products that meet consumer demand; a short sketch of this idea follows the list.
E-Commerce − Social media platforms are increasingly integrating e-commerce features, such as Instagram Checkout and Facebook Marketplace, which allow brands to sell products directly through social media. This makes it easier for consumers to shop for fashion products and can increase sales for fashion brands.
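As a rough illustration of the trend-identification point in the list above, here is a minimal Python sketch that compares hashtag mentions across two time windows and ranks which tags are gaining ground. The post data and tags are invented for the example and no real platform API is used; a brand’s actual pipeline would pull much larger samples and normalize for overall posting volume.

# Toy sketch of hashtag-based trend spotting; all post data below is invented.
from collections import Counter

last_month = ["#denim", "#linen", "#denim", "#cargo", "#linen"]
this_month = ["#cargo", "#cargo", "#linen", "#cargo", "#denim"]

def rising_tags(before, after):
    """Rank tags by growth in mention count between two periods."""
    before_counts, after_counts = Counter(before), Counter(after)
    growth = {tag: after_counts[tag] - before_counts[tag]
              for tag in set(before) | set(after)}
    # Sort by growth (descending), breaking ties alphabetically.
    return sorted(growth.items(), key=lambda kv: (-kv[1], kv[0]))

print(rising_tags(last_month, this_month))
# [('#cargo', 2), ('#denim', -1), ('#linen', -1)] for the toy data above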
Overall, social media is playing an important role in promoting fashion brands and products. By leveraging the power of social media, fashion brands can increase brand awareness, engage with customers, identify emerging trends, and sell products directly to consumers.
Conclusion

The fashion industry has been greatly impacted by the growth of the internet and new technologies. E-commerce and online shopping have made fashion more accessible to consumers, while social media has become a powerful tool for promoting fashion brands and products. The rise of ethical and sustainable fashion is also a growing trend, and the internet is facilitating this movement by increasing awareness, providing access to information, and promoting collaborations and partnerships.
The future of the fashion industry looks bright with the continued growth of the internet and the innovative use of technology. As fashion brands adapt to changing consumer demands and incorporate sustainability into their practices, the industry is poised to become more inclusive, transparent, and environmentally responsible.