Google, Apple, Meta, Amazon & Microsoft Join To Improve Voice Recognition

Speech recognition is used to access websites, speech translation, voice assistants and for operating devices.

But it can be difficult for voice-activated devices and services to work if a user’s speech pattern is affected by Lou Gehrig’s disease, Parkinson’s disease, or Down syndrome, among other conditions.

The Speech Accessibility Project aims to change that by bringing together five technology companies to solve the challenge of making speech recognition work for people with non-standard speech patterns.

The project will first work with English and then expand to other languages.

The Speech Accessibility Project website explained:

“…without diverse, representative data, ML models cannot learn how to understand a diversity of speech. This project aims to change that by creating the dataset needed to more effectively train these machine learning models.”

New Project to Advance Accessibility

The Speech Accessibility Project is a new program from the University of Illinois and five technology companies working together to make voice-activated technology accessible to a wider group of people.

The following companies are members of the new initiative: Amazon, Apple, Google, Meta, and Microsoft.
The project website stated the problem they will solve:

“Today’s speech recognition systems, such as voice assistants and translation tools, don’t always recognize people with a diversity of speech patterns often associated with disabilities.

This includes speech affected by Lou Gehrig’s disease or Amyotrophic Lateral Sclerosis, Parkinson’s disease, cerebral palsy, and Down syndrome.

In effect, many individuals in these and other communities may be unable to benefit from the latest speech recognition tools.”

Solution to Speech Recognition Accessibility

The Speech Accessibility Project will collect samples of different voice patterns and create an anonymous dataset.

This dataset will then be used to create machine learning models that can better understand the variety of voice patterns that are currently underserved.

Project Euphonia

Google launched its own AI-based accessibility initiative, Project Euphonia, in 2019. The project helped Google adapt speech recognition to understand non-standard spoken English.

The project collected speech pattern recordings from over 2,000 participants.

One of Google’s contributions to the Speech Accessibility Project is making it easy for Project Euphonia participants to anonymously contribute their speech samples to the new accessibility project.

Google’s announcement stated:

“Our hope is that by making these datasets available to research and development teams, we can help improve communication systems for everyone, including people with disabilities.”

Advanced Speech Recognition

This new project is a milestone in the creation of technology that can serve those with non-standard speech patterns.

What makes this new project exciting is that all five technology companies will work together to solve speech recognition problems instead of working in separate silos.

Improving access to devices and the Internet for underserved communities benefits everyone.

Citations

Google’s Announcement

New ways we’re making speech recognition work for everyone

Project Website

Official Website of the Speech Accessibility Project

Can Google, Amazon Gang Up on iPhone?

Is Amazon the next big mobile device threat to Apple, Google and the also-rans such as Microsoft and Hewlett-Packard? It just might be, after a series of interesting launches in recent weeks.

On top of its new services, Amazon already has a premium video streaming service, and a large digital music store. With all these pieces in place, it sure looks like Amazon is interested in launching a set of mobile devices. In fact, rumor has it the company is hard at work right now on an Android-based tablet to compete with the iPad. No word on an Android phone — the Amazon Blaze was just an April Fools’ joke — but with the Appstore for Android up and running, a phone makes a lot of sense too.

But instead of being a threat to Google, Amazon’s stealth intrusion into the mobile world could be an opportunity for a Google-Amazon alliance to threaten Apple’s dominance with the iPhone. Sure, there may be more Android than iOS devices in the wild, but Android is still missing the unified iTunes experience of the iPhone. Google has apps, but nothing to compare with the iTunes Store’s selection of music and video, e-books and podcasts. Amazon, on the other hand, has almost all of this ready to go, short of a large podcasting library. Instead of trying to slug it out with Amazon by launching its own music store and other services, Google could leave all the peripheral stuff to Amazon and just focus on creating a great Android OS experience.

Amazon Needs Hardware

The only way past this problem is for Amazon to create its own hardware, partner with a hardware manufacturer that will install the app on the phone, or work with a manufacturer through Google — a company that already has experience with phone development. Amazon may still want to create its own hardware eventually, as some pundits suggest. But partnering with Google to create, for example, an HTC-made, Google-Amazon branded phone would give Amazon a jumpstart into phones with a device sporting two recognizable names.

Google Doesn’t Get Retail

Google may do a great job at producing the Android operating system, but the company is not great at providing a customer-driven retail experience.

Many routinely complain about how hard it is to surface quality apps on the Market with so much substandard fare available. Finally, Google’s failures at customer support with the Nexus One made it pretty clear this company is not a retailer.

Amazon Needs Early Access

Early access is why Motorola was the first company out of the gate with an Android 3.0 tablet, the Xoom. Amazon would need the same kind of access to new builds of Android, and a Google-Amazon partnership would help with that.

Amazon Has The Goods

To compete with iTunes, Google also needs video, which means more negotiations with movie and television studios. Plus, if Google gets into the content-selling business and surfaces Google Music and Video at the top of its search results, this could provoke even more accusations of anticompetitive behavior from lawmakers and interest groups. Google would be better off leaving the content business to Amazon and focusing on making Android a great mobile OS instead.

Google Has the Web

Amazon may have a curated app store, videos, music, and e-books, but Google has a host of Web-based services Amazon doesn’t have — at least not yet. A Google-Amazon phone would have the wealth of Amazon’s apps and entertainment content, and the power of Google’s Web products such as Maps, Navigation, Gmail, Google Docs, and Search.

The problem is many of these products are only available to Android phones, as native smartphone apps, under special arrangement with Google. So an Amazon-Google partnership might ease the search giant’s concerns when that awkward conversation comes up about what to do with the Android Market on the new “AmazoGoogle” phones.

Connect with Ian Paul (@ianpaul) and Today@PCWorld on Twitter for the latest tech news and analysis.

Meta Group Report: Putting The Service Provider First

META Trend: During 2001/02, IT vendor management teams will increasingly integrate service levels with business objectives and use benchmarking to “market align” prices. Tactical contract management will continue, but strategic approaches will demand transaction (e.g., financial, service, technical) management and regulation (2001-04). Complex service provider interface functions will emerge (2001-05) to balance multi-sourced cross-process issues.

Divisions of Global 2000 (G2000) companies are leading a minority shift toward selecting a technology service provider before making application decisions. Although we anticipated this shift to occur by 2003/04, we are surprised to see the trend already taking root.

Despite our earlier predictions of delayed growth until 2003/04, we believe some near-term growth (2001/02) will shadow changes in corporate investment strategy. Moreover, we still do not expect wide-scale growth before 2003/04.

During the next 12 months, startup xSPs (i.e., service providers) will become overly optimistic based on the experience of this “false” start in the market, but will settle back from their euphoria in early 2002. Most IT organizations will still select an application that meets internal operational and integration requirements and then consider hosting options. During 2003/04, xSP offerings will be widely adopted by smaller companies, invoking an important shift as IT organizations choose a vendor before the application. This “primary” position will strengthen through 2006/07 until service providers supersede applications as the “first” decision.

Looking Forward

Our research also indicates a shift in the types of applications being managed by third parties. Currently, the market is centered on ERP applications (e.g., SAP, PeopleSoft). Although a few ASPs are positioning services around the CRM market, we believe such solutions are not feasible for most users because of the surrounding business process re-engineering (e.g., changes in sales processes) required to leverage CRM applications. During the next five years, we believe the greatest opportunity for xSP solutions lies in the following markets:

Collaborative applications: xSPs will play a significant role in hosting applications that provide internal and cross-company collaboration. Applications such as computer-aided design (CAD) currently stand out, but will expand to include tools such as Microsoft Office as Microsoft increases the collaborative capabilities of its suite.

Differentiation among service providers (through 2004) will be based on the applications they host and their expertise with specific applications. Increasingly, beginning in 2002/03 but becoming common by 2005, vendors will exhibit expertise around the integration of applications or on business solutions independent of specific applications/solutions.

Doing It Now

Companies selecting a vendor first must meet all the following criteria to avoid false claims or mistakes that can be costly to remedy:

Being completely agnostic to specific applications or technologies, including integration with other corporate systems, legacy interfaces, or data conversions

Being capable of tolerating technology risk such as unplanned downtime, technology transitions, or other uncontrolled changes

Being relatively insensitive to price changes if prices increase significantly beyond early estimates (sometimes by 2x)

Among many vendors, the two best positioned to provide such services are Corio and Jamcracker. Although they differ dramatically in business strategy, each offers specific skills or capabilities that make them feasible alternatives. Although larger vendors (e.g., EDS, IBM Global Services) will likely develop offerings, they will be unable to develop profitable offerings that target midmarket companies.

Corio: As an “original” application service provider (ASP), Corio was launched by integrating applications and positioning them toward small and medium enterprises. Corio has since branched into e-mail and ERP solutions but retains an understanding of smaller organizations. Although most companies find Corio responsive and helpful, a few have struggled to get responses from account managers that meet their custom needs. We believe divisions of larger corporations should be able to work successfully with Corio to design technology solutions ranging from ERP implementation/integration through basic e-mail/collaboration tools specific to the business. Companies requiring strong business expertise (i.e., vertical knowledge) should not depend on Corio to develop such skills (even if it promises); they should instead turn to other vendors already in possession of the needed skills.

Jamcracker: Jamcracker does not provide or host any of its own applications; rather, it works with various ASPs to integrate the needed solution. Therefore, Jamcracker can acquire and deliver nearly any combination of services. Clients should be careful to distinguish existing versus future integration (e.g., has Jamcracker truly integrated the applications, or is it merely sharing data over XML?). Moreover, clients should not expect Jamcracker to provide services at the lowest prices in the industry – there is a cost to integration, and we do not believe Jamcracker intends or is capable of playing the low-cost card. Jamcracker has superb financial backing (Accenture) but must seek secondary funding during 2001, which has been a problem for other high-quality companies. Although we do not anticipate problems, funding remains a risk that IT organizations must consider.

Business Impact: Service providers offering business solutions will increasingly enable improved collaboration across applications, business units, and company boundaries.

Bottom Line: IT managers should continue to select applications ahead of hosting/service providers, but avoid optimistic vendor promises for the next several years.

LLaMA Model: Leveraging Meta AI’s Large Language Model

Meta AI’s LLaMA (Large Language Model Meta AI) is a large language model released in February 2023. Models were trained at sizes ranging from 7 billion to 65 billion parameters. LLaMA’s developers reported that the 13-billion-parameter model outperformed the considerably larger GPT-3 (with 175 billion parameters) on most NLP benchmarks, and that the largest model was competitive with state-of-the-art models such as PaLM and Chinchilla.

LLaMA is a transformer-based language model, meaning it uses a neural network architecture originally designed for machine translation. Transformers can learn long-range dependencies between words, making them well suited to natural language processing and generation tasks.

Tip: Fill out this form to get weights for the LLaMA models.

After obtaining the weights, they must be converted to the Hugging Face Transformers format using the conversion script. The script can be invoked with a command like the following:

python src/transformers/models/llama/ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path

Following conversion, the model and tokenizer can be loaded by using:

from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")

Note that running the script requires enough CPU RAM to hold the entire model in float16 precision (even though the largest versions ship as multiple checkpoints, each of which holds a portion of the model’s weights, they all need to be loaded in RAM). Thus, 130GB of RAM is required for the 65B model.
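The arithmetic behind that figure is simple: float16 stores each parameter in 2 bytes, so required RAM scales linearly with parameter count. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope RAM estimate for holding a model in float16:
# each parameter occupies 2 bytes.
def fp16_ram_gb(n_params_billion: float) -> float:
    bytes_needed = n_params_billion * 1e9 * 2  # 2 bytes per parameter
    return bytes_needed / 1e9                  # bytes -> GB (decimal)

for size in (7, 13, 33, 65):
    print(f"{size}B parameters -> ~{fp16_ram_gb(size):.0f} GB")
# the 65B model comes out to ~130 GB, matching the figure above
```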

The LLaMA tokenizer is a BPE model based on SentencePiece. One quirk: when decoding a sequence, if the first token is the start of a word (e.g., “Banana”), the tokenizer does not prepend the prefix space to the string.

LLaMA Model

class transformers.LlamaModel(config: LlamaConfig)

Parameters

config (LlamaConfig) — Model configuration class containing all of the model’s parameters. Initializing with a config file loads only the configuration, not the weights associated with the model. To load the model weights, use the from_pretrained() method.

This model is also a subclass of PyTorch torch.nn.Module. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all basic usage and behavior questions.

A transformer decoder with config.num_hidden_layers layers, each of which is a LlamaDecoderLayer.

forward(input_ids: torch.LongTensor = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None)

Parameters

input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Token identifiers for input sequences in the vocabulary. Padding, if provided, will be ignored by default.

attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

1 for unmasked tokens

0 for masked tokens.

If past_key_values is used, only the last decoder_input_ids need to be entered (see past_key_values).

If you want to change the padding behavior, read modeling_opt._prepare_decoder_attention_mask and make the necessary changes.
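To make the 0/1 convention concrete, here is a minimal, framework-free sketch of how such a mask could be derived from a padded batch. The pad_id of 0 is an assumption for illustration; real tokenizers expose the actual padding id (e.g., tokenizer.pad_token_id):

```python
# Hypothetical sketch: derive an attention mask from padded token ids.
# pad_id=0 is an assumed padding id, used here only for illustration.
def build_attention_mask(batch_token_ids, pad_id=0):
    # 1 = real token (attended to), 0 = padding token (masked out)
    return [[0 if tok == pad_id else 1 for tok in seq] for seq in batch_token_ids]

batch = [
    [5, 17, 42, 0, 0],   # length-3 sequence padded to 5
    [9, 3, 8, 21, 6],    # full-length sequence
]
print(build_attention_mask(batch))  # [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
```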

position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of each input sequence token’s position in the position embeddings. Selected in the range [0, config.n_positions - 1].

past_key_values (List[torch.FloatTensor], optional) — Contains pre-computed hidden states (key and values in the self-attention and cross-attention blocks) that can be used to speed up sequential decoding (see the past_key_values input).

If past_key_values is used, the user can choose to input only the last decoder_input_ids of shape (batch_size, 1) (those whose past key value states were not given to this model) instead of all decoder_input_ids of shape (batch_size, sequence_length).

inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Instead of giving input_ids, you can also pass an embedded representation directly. This is handy if you want greater control over how input_ids indices are converted into associated vectors than the model’s internal embedding lookup matrix provides.
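For intuition, the internal lookup that inputs_embeds bypasses amounts to indexing an embedding table with each token id. A toy sketch follows; the table values and the tiny hidden size of 2 are made up purely for illustration:

```python
# Toy illustration of the embedding lookup that inputs_embeds replaces:
# each token id selects one row (vector) from an embedding table.
embedding_table = [
    [0.0, 0.0],   # row for token id 0
    [0.1, 0.9],   # row for token id 1
    [0.5, 0.5],   # row for token id 2
]

def embed(input_ids):
    # shape: (sequence_length,) -> (sequence_length, hidden_size)
    return [embedding_table[i] for i in input_ids]

print(embed([1, 2, 0]))  # [[0.1, 0.9], [0.5, 0.5], [0.0, 0.0]]
```

Passing inputs_embeds directly skips this lookup, letting the caller supply the per-token vectors themselves.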

use_cache (bool, optional) — If True, the key value states from the past are returned and can be used to speed up decoding (see past_key_values).

output_attentions (bool, optional) — Whether or not to return all attention layers’ attention tensors. For more information, see attentions under returned tensors.

output_hidden_states (bool, optional) — Whether or not to return all layers’ hidden states. For further information, see hidden_states under returned tensors.

return_dict (bool, optional) — Whether to return a Model Output instead of a simple tuple.

The forward method of LlamaModel overrides the __call__ special method.

LLaMA Model FAQs

What exactly is the LLaMA model?

The LLaMA model is a sophisticated language-based machine learning model designed to harness the power of language for a wide range of applications. It employs cutting-edge techniques to improve natural language processing, comprehension, and generation.

How does the LLaMA model revolutionize machine learning?

The LLaMA model transforms machine learning by leveraging the power of language. It allows machines to understand and generate human-like language, bridging the gap between humans and intelligent systems. This opens up new avenues for improved communication, decision-making, and data analysis.

What are the applications of the LLaMA model?

The LLaMA model has a wide range of applications. It can be used to deliver more human-like interactions in chatbots and virtual assistants. It has the potential to improve sentiment analysis, language translation, and content development. It can also be used in data analysis, customer support, and knowledge management systems.

How do I integrate the LLaMA model into my current AI infrastructure?

You can integrate the LLaMA model into your AI infrastructure using machine learning frameworks that support language-based models. Many libraries and platforms include pre-trained LLaMA models that can be fine-tuned or used as-is. In most cases, integration entails fine-tuning the model on relevant data and connecting it to your existing systems or applications.


Five Reasons To Prefer Apple Maps Over Google Maps

Has there ever been an app that has caused so much of an uproar in the Apple community as Apple Maps? It was released in the fall of 2012 with iOS 6, and it was not well received, to say the least. It was so poorly received that Tim Cook even wrote a letter apologizing for the poor launch of Apple Maps, which contributed to the firing of Scott Forstall:

At Apple, we strive to make world-class products that deliver the best experience possible to our customers. With the launch of our new Maps last week, we fell short on this commitment. We are extremely sorry for the frustration this has caused our customers and we are doing everything we can to make Maps better.

Now that we are almost six years into Apple Maps, I am of the opinion that Apple was right, certainly in a post-Facebook-privacy-scandal world, to replace Google Maps with its in-house mapping product. In fact, Google Maps isn’t on my iPhone, and here are five reasons I prefer Apple Maps over Google Maps.

1. Privacy 

Apple and Google have drastically different business models, and a Maps product is a key part of that. In fact, Google even has a product where you can see where you’ve been in the past. It’s called Google Timeline. While I can see why some would love this feature, I do not. Apple’s privacy policy for Apple Maps is much more to my liking:

You don’t have to sign in to use Maps. Personalized features, like letting you know when it’s time to leave for your next appointment, are created using data on your device. The data that Maps collects while you use the app — like search terms, navigation routing, and traffic information — is associated with random identifiers so it can’t be tied to your Apple ID. These identifiers reset themselves as you use the app to ensure the best possible experience and to improve Maps. Maps extensions that are used in ride-booking and reservation apps run in their own sandboxes and share permissions with their own parent apps. For ride-booking apps, Maps shares only your starting point and destination with the extension. And when you reserve a table at a restaurant, the extension knows only the point of interest you tapped.

Location data is one of the most private things you can share with someone. I’m not a “tin-foil” hat type person, but I do not want an app tracking everywhere I go.

2. Siri Integration

Is Siri the best voice assistant on the market? Most definitely not, but I find it incredibly useful in the car. Being able to say “Hey Siri, give me directions home” is incredibly helpful while driving. Unless Apple allows users to replace Siri with a new default assistant (Google or Amazon), Siri will remain the best assistant for iPhone users. You can also ask for directions to specific places (“Hey Siri, give me directions to 123 Main Street,” etc.).

3. Apple Watch

In a period where a lot of Apple Watch apps are disappearing, Apple Maps remains a built-in (and useful) feature. When you have your iPhone doing navigation, Apple Watch will vibrate with alerts to turn. This feature also works with walking directions. This feature alone makes Apple Maps an incredibly attractive platform if you wear Apple Watch.

4. Yelp Integration

Instead of having to build a database of company reviews, Apple Maps has Yelp integration to populate data. Yelp has been around for years and has a plethora of great data about local businesses. In fact, I use Yelp quite a bit for restaurant reservations. The integration of the two apps is well done, and a key part of the Apple Maps experience. I’d love to see Apple look into features like restaurant recommendations though.

5. Good Enough Maps Data

If I had to pick a product based on the map data alone, it would be hard to choose anything but Google Maps. They’ve been around for a lot longer than Apple Maps, and are continually getting better. On the flip side, Apple Maps hasn’t given me incorrect information in years. My non-scientific opinion is that Apple Maps data is 85% as good as Google. That 85% is 100% of what I need, and the other benefits of Apple Maps outweigh any negatives.

One final reason I love Apple Maps: it lists if a business takes Apple Pay.


Meta Attempts A New, More ‘Inclusive’ Ai Training Dataset

Meta hoped to help mitigate some of these concerns via Thursday’s release of Casual Conversations v2, an update to its 2021 AI audio-visual training dataset. Guided by a publicly available November literature review, the data offers more nuanced analysis of human subjects across diverse geographic, cultural, racial, and physical demographics, according to the company’s statement.

[Related: No, the AI chatbots (still) aren’t sentient.]

Meta states v2 is “a more inclusive dataset to measure fairness,” derived from 26,467 video monologues recorded in seven countries, offered by 5,567 paid participants from Brazil, India, Indonesia, Mexico, Vietnam, the Philippines, and the United States, who also provided self-identified attributes including age, gender, and physical appearance. Although Casual Conversations’ initial release included over 45,000 videos, they were drawn from just over 3,000 individuals residing in the US and self-identifying via fewer metrics.

Tackling algorithmic biases in AI is a vital hurdle in an industry long plagued by AI products offering racist, sexist, and otherwise inaccurate responses. Much of this comes down to how algorithms are created, cultivated, and provided to developers.

But while Meta touts Casual Conversations v2 as a major step forward, experts remain cautiously optimistic, and urge continued scrutiny for Silicon Valley’s seemingly headlong rush into an AI-powered ecosystem.

“This is [a] space where almost anything is an improvement,” Kristian Hammond, a professor of computer science at Northwestern University and director of the school’s Center for Advancing the Safety of Machine Intelligence, writes in an email to PopSci. Hammond believes Meta’s updated dataset is “a solid step” for the company—especially considering past privacy controversies—and feels its emphasis on user consent and research participants’ labor compensation is particularly important.

“But an improvement is not a full solution. Just a step,” he cautions.

To Hammond, a major question remains regarding exactly how researchers enlisted participants in making Casual Conversations v2. “Having gender and ethnic diversity is great, but you also have to consider the impact of income and social status and more fine-grained aspects of ethnicity,” he writes, adding, “There is bias that can flow from any self-selecting population.”

[Related: The FTC has its eyes on AI scammers.]

When asked about how participants were selected, Nisha Deo of Meta’s AI Communications team told PopSci via email, “I can share that we hired external vendors with our requirements to recruit participants,” and that compensatory rates were determined by these vendors “having the market value in mind for data collection in that location.”

When asked to provide concrete figures regarding pay rates, Meta stated it was “[n]ot possible to expand more than what we’ve already shared.”

Deo, however, additionally stated Meta deliberately incorporated “responsible mechanisms” across every step of data cultivation, including a comprehensive literature review in collaboration with academic partners at Hong Kong University of Science and Technology on existing dataset methodologies, as well as comprehensive guidelines for annotators. “Responsible AI built this with ethical considerations and civil rights in mind and are open sourcing it as a resource to increase inclusivity efforts in AI,” she continued.

For industry observers like Hammond, improvements such as Casual Conversations v2 are welcome, but far more work is needed, especially when the world’s biggest tech companies appear to be entering an AI arms race. “Everyone should understand that this is not the solution altogether. Only a set of first steps,” he writes. “And we have to make sure that we don’t get so focused on this very visible step… that we stop poking at organizations to make sure that they aren’t still gathering data without consent.”
