

We encourage every crypto enthusiast to explore KeplerSwap and get familiar with one of the most ambitious decentralized finance projects of the year. KeplerSwap is building a new BSC-based DeFi ecosystem with liquidity provision, LUCKY POOL, and SPACE at its core.

What does KeplerSwap solve?

We are at a crossroads: DEXs built under DeFi 1.0 are often criticized as too narrow and ecologically limited to sustain robust development, and the DAO concept of decentralization has not been achieved systematically, which remains a barrier for the DeFi arena. KeplerSwap, by contrast, continues to expand its trading ecosystem, in which DeFi users reshape and adjust their affiliations by breaking down the social barriers of DeFi 1.0.

Through invitation affiliations, KeplerSwap's user ecology is firmly bound together: users form strong bonds with one another, and traders are more inclined to exchange tokens on KeplerSwap and participate in liquidity provision. At the same time, multi-level invitation incentives encourage trading activity and promote KeplerSwap's stable growth.

LUCKY POOL's most distinctive function is to give the entire trading ecosystem a degree of certainty in earnings, closing the loop. Users who reach a certain amount of liquidity provision can participate in the LUCKY POOL raffle and in liquidity provision to earn revenue, and can also enjoy a weekly lucky draw with its surprises.

SPACE is the governance feature of the KeplerSwap ecosystem: users vote on community governance, and can create a SPACE and receive additional rewards, completing the basic concept of decentralized finance.

In addition to the features that current DEXs already have, KeplerSwap introduces innovations not yet seen in other DEXs, directly addressing the drawbacks of DeFi 1.0.

Innovation is the biggest bright spot for Capital’s continued bullish view of KeplerSwap

In fact, the entire crypto world, led by investment capital, is eagerly awaiting a new DEX to inject fresh dynamism into decentralized finance, and KeplerSwap's arrival is exactly what it has been expecting.

In recent years, we've seen a number of derivative sectors emerge in DeFi, with breakthroughs in many areas, including DEXs, lending, stablecoins, oracles, aggregators, wallets, and derivatives. In AMM exchanges, however, we see too little innovation. There is a clear need for KeplerSwap, a DeFi 2.0-era exchange that brings a new experience to decentralized trading.

KeplerSwap is positioned as an explorer of DeFi 2.0 and the first decentralized exchange under the DeFi 2.0 architecture, starting with a decentralized exchange protocol on the BSC public chain and gradually implementing multi-chain and cross-chain aggregation, making it easy for anyone to invest in and trade digital assets on the platform.

KeplerSwap is an attractive value proposition in every way. The team treats technology research and development as its main task and brings in top engineers from many countries around the world.

Given the ambitious vision the project aims to achieve, KeplerSwap has developed a detailed and resilient roadmap and is moving forward in the direction it has set. According to the official plan, KeplerSwap will achieve decentralized autonomy in 2023 and the financial transformation of physical and digital assets by 2024; this innovative idea is also KeplerSwap's most attractive aspect.

The current and next steps of the project

Since its introduction, the KeplerSwap concept has completed functional development and testing after two years of validating its technical logic and commercial model, with the official launch in Q4 of 2023.

Recently, KeplerSwap's official website has been iterated, the project's white paper has been updated, and the community has been conducting the latest round of airdrops. After Certik completes its code audit, KeplerSwap will launch its IDO and global public testing, just one step away from the project's official takeoff.


China's New Drone Company Is Building a UAV With a 20-Ton Payload


This twin-engine, dual-tail MALE (medium altitude, long endurance) attack drone will be able to carry about a ton of payload.

China’s drone newcomer Tengoen Technology (also spelled Tengdun) has ambitious plans. The company promises to market armed drones for purchase. It also promises to build the world’s biggest cargo drone. That’s quite a slate for a company that was only founded in 2023.

SF Express Delivery

In partnership with Tengoen, SF Express Delivery uses a modified TB-001 UCAV to drop cargo pods.

The TB-001 Scorpion, Tengoen's flagship vehicle, is a twin-engine, double-tail drone. It has a maximum takeoff weight of 2.8 tons, a range of more than 3,700 miles, and provisions to carry two 220-pound bombs or missiles. Tengoen has also partnered with Chinese delivery company SF Express to build a souped-up TB-001 for cargo delivery, increasing the drone's size to 3.3 tons, with a 1.2-ton payload. In December 2023, the modified TB-001 showed off its capability by para-dropping supplies to a Huawei repair crew fixing a cell tower in the mountainous Yunnan Province.

Super Cargo Drone

Tengoen is already building the first of these 137-foot-wide, 4,660-mile-range cargo drones.

In the cargo and delivery space, Tengoen is already at work building an eight-engine drone with a wingspan of more than 137 feet, designed to carry a 20-ton payload up to 4,660 miles. That's akin to a medium-sized manned cargo plane.

The carbon-fiber, double-bodied drone carries the payload module between the two fuselages (looking a bit like a robotic baby brother to the Scaled Composites Stratolaunch). It is being built at Tengoen's facility in Chengdu, and it is supposedly slated to take flight in 2023.

A Drone for All Seasons

The behemoth can be customized for missions like search and rescue, aerial refueling, and intelligence gathering.

Tengoen executives were quick to highlight civilian applications for the unmanned aircraft system: space launch, fire fighting, and emergency relief. The drone’s large size and modular payload capacity could also take on a variety of military missions, including intelligence gathering and electronic warfare. Its large payload could make it function as an aerial tanker, too, refueling aircraft like search-and-rescue helicopters, patrolling fighters, cargo transports, and bombers.

Mass Private Helicopters

While the Ehang 184 is quite pricey, Ehang hopes that a mass-production run of its successors would achieve economies of scale, bringing down the price.

Tengoen is just one character in a larger story about China's booming unmanned aerospace sector. In fact, it's one of 110 UAV manufacturers in Chengdu alone. Other private Chinese manufacturers like Ehang and DJI have products with dual-use applications, too. As China's multibillion-dollar drone industry grows in size and sophistication, China's private sector aims for a bigger share of both the People's Liberation Army's purchases and the wider global market.

Peter Warren Singer is a strategist and senior fellow at the New America Foundation. He has been named by Defense News as one of the 100 most influential people in defense issues. He was also dubbed an official “Mad Scientist” for the U.S. Army’s Training and Doctrine Command. Jeffrey is a national security professional in the greater D.C. area.


Building A Blockchain In Python

This article was published as a part of the Data Science Blogathon.

Introduction on Blockchain

Each block in a blockchain is unique and contains a hash value that differentiates it from every other block. Fingerprinting is the concept used to link blocks on a blockchain: as new blocks are appended to the end of the chain, the hash of the most recent block is used when building the hash of the new block, which makes the blocks in a blockchain tamper-evident.
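As a minimal standalone illustration of fingerprinting (the dictionaries and the `block_hash` helper below are invented for this sketch, not the article's actual block format):

```python
import hashlib

def block_hash(block):
    # hash the block's full contents (its string representation)
    return hashlib.sha256(str(block).encode("utf-8")).hexdigest()

genesis = {"index": 0, "info": "genesis"}
second = {"index": 1, "info": "hello", "previous_hash": block_hash(genesis)}

# the link holds as long as the earlier block is untouched
print(second["previous_hash"] == block_hash(genesis))  # True

# tampering with the earlier block breaks the link
genesis["info"] = "tampered"
print(second["previous_hash"] == block_hash(genesis))  # False
```

Because each block stores its predecessor's hash, changing any earlier block changes its hash and exposes the tampering.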

In this article, we will be building a simple blockchain in python that will store some text information from users. The blockchain technology used in the industry is far more complex than the blockchain that we will be building in this article, but this is enough to understand blockchain technology from an implementation perspective.

Each block on our blockchain will have the following properties:

Index – This is an auto-incremented number used to recognize a block in the blockchain.

Sender – The user who created the block on the blockchain.

Timestamp – The time at which the block was created.

Previous hash – The hash value of the preceding block in the chain. This is useful in verifying the integrity of a blockchain by fingerprinting and linking the blocks in the blockchain.

Hash – This is a hash generated using all the above-mentioned properties present in a block. This property uniquely identifies a block in a blockchain.

Nonce – This is a number that helps in creating the hash as per the difficulty requirement set for the blockchain.

The first block of a blockchain is called the genesis block. Since the chain is empty at that point, there is no preceding hash to extract, so the previous hash of the genesis block is generated from a secret specified by the creator of the blockchain. This ensures that all blocks in the chain share the same structural schema.

Each blockchain also has a difficulty level associated with it, which specifies how many leading characters of the hash must be 0. To satisfy this condition we have the nonce property, a whole number that helps generate a hash with the specified number of leading zeros. Since the hashing algorithm used in most blockchain technology is SHA-256, it is almost impossible to find the nonce by pre-calculating the hash value; trial and error is the only way, which makes it computationally expensive and time-consuming. The process of guessing the nonce that generates a hash meeting the requirements is called mining, and it is the price of adding a block to the blockchain.

We will set the difficulty level of our blockchain to 4, so the first 4 characters of each hash must be '0000'. For Bitcoin, the difficulty is far higher and adjusts dynamically so that mining each block takes roughly 10 minutes.
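To make the trial-and-error search concrete, here is a minimal sketch of nonce mining (the function name and sample data are illustrative, not part of the article's class):

```python
import hashlib

def mine_nonce(data, difficulty=4):
    """Trial-and-error search for a nonce whose SHA-256 hash
    has `difficulty` leading zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode("utf-8")).hexdigest()
        if digest[:difficulty] == "0" * difficulty:
            return nonce, digest
        nonce += 1

nonce, digest = mine_nonce("hello")
print(nonce, digest)  # the digest starts with '0000'
```

On average, a difficulty of 4 hex digits requires about 16^4 = 65,536 attempts, which is why raising the difficulty makes mining exponentially more expensive.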

Now that we have a basic understanding of the blockchain that we will be building in this article, let’s get our hands dirty and start building.

Importing Modules

There is nothing fancy here; all the modules used in building the blockchain are native python modules and can be directly imported without having to install them using pip. We will be using the hashlib module for performing SHA256 hashing while the time module will be useful to fetch the block generation time.

import hashlib
from time import time
from pprint import pprint

Building the Blockchain

We will define a class called blockchain with two properties, blocks and secret. The blocks property stores all the blocks on the blockchain, while the secret variable is used to build the previous hash for the genesis block. We will define three methods: create_block, validate_blockchain, and show_blockchain.

The create_block method creates a new block, with the properties explained earlier, and appends it to blocks. It also computes the nonce that satisfies the blockchain's requirement of four leading zeros in each hash.

The validate_blockchain method checks the integrity of the blockchain. It verifies the fingerprinting on each block, i.e., that each block contains the correct hash of its predecessor, and tells us whether the chain is intact. Any discrepancy means it is safe to assume someone has meddled with the blocks; this property is what makes blockchains tamper-evident.

Finally, the show_blockchain method displays all the blocks on the blockchain.

class blockchain():
    def __init__(self):
        self.blocks = []
        self.__secret = ''
        self.__difficulty = 4
        # guessing the nonce
        i = 0
        secret_string = '/*SECRET*/'
        while True:
            _hash = hashlib.sha256(str(secret_string+str(i)).encode('utf-8')).hexdigest()
            if(_hash[:self.__difficulty] == '0'*self.__difficulty):
                self.__secret = _hash
                break
            i+=1

    def create_block(self, sender:str, information:str):
        block = {
            'index': len(self.blocks),
            'sender': sender,
            'timestamp': time(),
            'info': information
        }
        if(block['index'] == 0):
            block['previous_hash'] = self.__secret  # for genesis block
        else:
            block['previous_hash'] = self.blocks[-1]['hash']
        # guessing the nonce
        i = 0
        while True:
            block['nonce'] = i
            _hash = hashlib.sha256(str(block).encode('utf-8')).hexdigest()
            if(_hash[:self.__difficulty] == '0'*self.__difficulty):
                block['hash'] = _hash
                break
            i+=1
        self.blocks.append(block)

    def validate_blockchain(self):
        valid = True
        n = len(self.blocks)-1
        i = 0
        while(i<n):
            if(self.blocks[i]['hash'] != self.blocks[i+1]['previous_hash']):
                valid = False
                break
            i+=1
        if valid:
            print('The blockchain is valid...')
        else:
            print('The blockchain is not valid...')

    def show_blockchain(self):
        for block in self.blocks:
            pprint(block)
            print()

Now that we have built the blockchain class, let’s use it to create our blockchain and add some blocks to it. I will add 3 blocks to the blockchain and will validate the blockchain and finally print the blocks and look at the output.

Python Code:
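The embedded snippet did not survive extraction here. Based on the output shown below, the driver code was roughly as follows; the class definition is repeated so the snippet runs standalone, and the senders and messages are taken from the printed blocks:

```python
import hashlib
from time import time
from pprint import pprint

class blockchain():
    def __init__(self):
        self.blocks = []
        self.__difficulty = 4
        # derive the genesis "previous hash" from a secret string
        i = 0
        secret_string = '/*SECRET*/'
        while True:
            _hash = hashlib.sha256(str(secret_string+str(i)).encode('utf-8')).hexdigest()
            if _hash[:self.__difficulty] == '0'*self.__difficulty:
                self.__secret = _hash
                break
            i += 1

    def create_block(self, sender, information):
        block = {'index': len(self.blocks), 'sender': sender,
                 'timestamp': time(), 'info': information}
        if block['index'] == 0:
            block['previous_hash'] = self.__secret  # genesis block
        else:
            block['previous_hash'] = self.blocks[-1]['hash']
        # mine the nonce until the hash has four leading zeros
        i = 0
        while True:
            block['nonce'] = i
            _hash = hashlib.sha256(str(block).encode('utf-8')).hexdigest()
            if _hash[:self.__difficulty] == '0'*self.__difficulty:
                block['hash'] = _hash
                break
            i += 1
        self.blocks.append(block)

    def validate_blockchain(self):
        # each block must reference the hash of its predecessor
        valid = True
        i = 0
        while i < len(self.blocks) - 1:
            if self.blocks[i]['hash'] != self.blocks[i+1]['previous_hash']:
                valid = False
                break
            i += 1
        print('The blockchain is valid...' if valid else 'The blockchain is not valid...')

    def show_blockchain(self):
        for block in self.blocks:
            pprint(block)
            print()

# create three blocks matching the senders/messages in the output below
b = blockchain()
b.create_block('Ram', 'Python is the best programming language!!')
b.create_block('Vishnu', 'I love cybersecurity')
b.create_block('Sanjay', 'AI is the future')
b.show_blockchain()
b.validate_blockchain()
```

Note that the timestamps (and therefore the hashes and nonces) will differ on every run, but every hash will start with '0000'.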

We can see the blocks present on the blockchain, and validate_blockchain reports that the chain is valid. Now let's meddle with our blockchain: insert a new block in between the existing blocks and run validate_blockchain again to see what it reports.

block = {
    'index': 2,
    'sender': 'Arjun',
    'timestamp': time(),
    'info': 'I am trying to tamper with the blockchain...'
}
block['previous_hash'] = b.blocks[1]['hash']
i = 0
while True:
    block['nonce'] = i
    _hash = hashlib.sha256(str(block).encode('utf-8')).hexdigest()
    if(_hash[:4] == '0'*4):
        block['hash'] = _hash
        break
    i+=1
b.blocks.insert(2, block)
b.show_blockchain()
b.validate_blockchain()

This is the output we get.

{'hash': '0000bfffcda53dc1c98a1fbaeab9b8da4e410bbcc24690fbe648027e3dadbee4',
 'index': 0,
 'info': 'Python is the best programming language!!',
 'nonce': 91976,
 'previous_hash': '000023ae8bc9821a09c780aaec9ac20714cbc4a829506ff765f4c82a302ef439',
 'sender': 'Ram',
 'timestamp': 1654930841.4248617}

{'hash': '00006929e45271c2ac38fb99780388709fa0ef9822c7f84568c22fa90683c15f',
 'index': 1,
 'info': 'I love cybersecurity',
 'nonce': 171415,
 'previous_hash': '0000bfffcda53dc1c98a1fbaeab9b8da4e410bbcc24690fbe648027e3dadbee4',
 'sender': 'Vishnu',
 'timestamp': 1654930842.8172457}

{'hash': '000078a974ba08d2351ec103a5ddb2d66499a639f90f9ae98462b9644d140ca9',
 'index': 2,
 'info': 'I am trying to tamper with the blockchain...',
 'nonce': 24231,
 'previous_hash': '00006929e45271c2ac38fb99780388709fa0ef9822c7f84568c22fa90683c15f',
 'sender': 'Arjun',
 'timestamp': 1654930848.2898204}

{'hash': '0000fe124dad744f17dd9095d61887881b2cbef6809ffd97f9fca1d0db055f2a',
 'index': 2,
 'info': 'AI is the future',
 'nonce': 173881,
 'previous_hash': '00006929e45271c2ac38fb99780388709fa0ef9822c7f84568c22fa90683c15f',
 'sender': 'Sanjay',
 'timestamp': 1654930845.594902}

The blockchain is not valid...

We can see that validate_blockchain now reports that the blockchain is not valid: the fingerprinting no longer matches, so the integrity of our blockchain has been compromised.

In this article, we discussed the following:

What is blockchain and how to build a blockchain using Python?

Properties of blockchain

Fingerprinting in blockchain

Difficulty level and nonce in blockchain

Building our own blockchain

Tampering with the blocks

Checking for the integrity of the tampered blockchain

To continue this project further, the blockchain can be hosted and deployed as a REST API server in the cloud, which users can call to store information on the blockchain. For the sake of simplicity, our blockchain is not distributed. If you are really interested in using blockchain technology as a database, feel free to look at BigchainDB, a decentralized blockchain database with support for both Python and Node.js. Alternatively, GunDB is a popular graph-based decentralized database engine used in web3 applications in recent times.

That’s it for this article (building blockchain using Python). Hope you enjoyed reading this article and learned something new. Thanks for reading and happy learning!

 The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.


Tensorflow Functional Api: Building A Cnn

This article was published as a part of the Data Science Blogathon


In today's article, I will talk about developing a convolutional neural network using the TensorFlow functional API. It will demonstrate the capability of the functional API, which allows us to produce hybrid model architectures beyond what a plain sequential model can express.

Photo by Rita Morais from Unsplash

About: TensorFlow

TensorFlow is a popular library, one you have probably heard of constantly in the deep learning and artificial intelligence community, where there are numerous open-source packages and projects.

TensorFlow, an open-source artificial intelligence library managing data flow graphs, is the most prevalent deep-learning library. It is used to generate large-scale neural networks with countless layers.

TensorFlow is used for deep learning and machine learning problems such as classification, perception, understanding, discovery, prediction, and creation.

So, when we tackle a classification problem, we often apply a convolutional neural network model. Still, most developers are familiar only with sequential models, in which the layers follow one another one by one.

The sequential API lets you design models layer by layer for most problems. It is limited, however, in that it does not allow you to produce models that share layers or have multiple inputs or outputs. Because of this, we can use TensorFlow's functional API, for example to build a multi-output model.

Functional API (tf.Keras)

The functional API in tf.keras is an alternative way of building more flexible models, including considerably more complex architectures.

For example, in a slightly more complicated machine learning scenario, you may face situations where you need multiple models, or multiple outputs, for the same data.

Say we need to produce two outputs. The most obvious option would be to build two separate models on the same data and use each to make its predictions.

This would be manageable, but what if we needed 50 outputs? It would be a pain to maintain all those separate models.

Alternatively, it is more fruitful to construct a single model with multiple outputs.

In the functional API approach, models are defined by creating layers and connecting them directly to each other in pairs, then creating a Model that specifies which layers serve as the input and the output.
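As a sketch of this idea, here is a single functional model with two output heads; the layer sizes, head names, and losses are invented for illustration and are not from this article:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# one shared trunk feeding two task-specific heads
inputs = Input(shape=(16,))
shared = Dense(32, activation='relu')(inputs)
reg_head = Dense(1, name='regression_head')(shared)
cls_head = Dense(3, activation='softmax', name='class_head')(shared)

# a single Model with two outputs instead of two separate models
model = Model(inputs=inputs, outputs=[reg_head, cls_head])
model.compile(optimizer='adam',
              loss={'regression_head': 'mse',
                    'class_head': 'sparse_categorical_crossentropy'})
```

Scaling this pattern to 50 heads means adding 49 more Dense calls to one model, rather than maintaining 50 separate models.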

How is the Sequential API different?

The Sequential API enables you to generate models layer by layer for most common problems. It is limited because it does not allow you to design models that share layers or have multiple inputs or outputs.

Let us see how to create a Sequential API model object below:

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

In the functional API, you can design models with much more versatility. You can easily define models in which layers connect to more than just the preceding and succeeding layers.

You can connect a layer to several other layers. As a consequence, producing heterogeneous networks such as siamese networks and residual networks becomes feasible.
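For instance, a tiny siamese-style network can reuse one encoder across two inputs; all sizes below are illustrative, not part of this article's MNIST model:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

# one encoder whose weights are shared by both branches
encoder = tf.keras.Sequential([Dense(32, activation='relu'), Dense(16)])

left = Input(shape=(8,))
right = Input(shape=(8,))
emb_left = encoder(left)    # the same weights...
emb_right = encoder(right)  # ...process both inputs

# element-wise L1 distance between embeddings, then a similarity score
distance = Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_left, emb_right])
score = Dense(1, activation='sigmoid')(distance)

siamese = Model(inputs=[left, right], outputs=score)
```

This kind of weight sharing across branches is exactly what the Sequential API cannot express.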

Let's begin developing a CNN model using the functional API

In this post, we utilize the MNIST dataset to build a convolutional neural network for image classification. The MNIST database comprises 60,000 training images and 10,000 testing images collected from American Census Bureau employees and American high school students.

# import libraries
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical  # needed for the label conversion below

# load data
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# convert sparse labels to categorical values
num_labels = len(np.unique(y_train))
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# preprocess the input images
image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
x_test = np.reshape(x_test, [-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

In the code above,

The data comes pre-split into two groups, train and test, each with its inputs and labels.

The independent variables (x_train and x_test) hold greyscale pixel values from 0 to 255, whereas the dependent variables (y_train and y_test) carry labels from 0 to 9, describing which digit each image actually is.

It is good practice to normalize our data, as is routinely required in deep learning models. We accomplish this by dividing the pixel values by 255.

Next, we initialize parameters for the networks.

# parameters for the network
input_shape = (image_size, image_size, 1)
batch_size = 128
kernel_size = 3
filters = 64
dropout = 0.3

In the code above,

input_shape: defines the standalone Input layer that specifies the shape of the input data. The Input layer accepts a shape argument, a tuple that describes the dimensions of the input.

batch_size: a hyperparameter that determines the number of samples to run through before updating the internal model parameters.

kernel_size: the dimensions (height x width) of the filter mask. Convolutional neural networks (CNNs) are essentially a stack of layers marked by various filters' operations on the input; those filters are ordinarily called kernels.

filters: the number of filters in each convolutional layer; each filter is a set of weights with which we convolve the input.

dropout: a regularization process in which randomly selected neurons are ignored during training. This means their contribution to the activation of downstream neurons is temporarily removed on the forward pass.

Let us define a simplistic Multilayer Perceptron, a convolutional neural network:

# utilizing functional API to build cnn layers
inputs = Input(shape=input_shape)
y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(inputs)
y = MaxPooling2D()(y)
y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(y)
y = MaxPooling2D()(y)
y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(y)
# convert image to vector
y = Flatten()(y)
# dropout regularization
y = Dropout(dropout)(y)
outputs = Dense(num_labels, activation='softmax')(y)
# model building by supplying inputs/outputs
model = Model(inputs=inputs, outputs=outputs)

In the code above,

We specify a convolutional neural network for multiclass classification.

The model has an input layer, three convolutional hidden layers with 64 filters each, and an output layer with 10 outputs, one per digit class.

Rectified linear activation functions are applied in the hidden layers, and a softmax activation function is used in the output layer for multiclass classification.

And you can observe that the layers in the model are connected pairwise. This is achieved by specifying where the input comes from when defining each new layer.

As with a Sequential API model, we can summarize, fit, evaluate, and use this model to make predictions.

TensorFlow provides a Model class that you can use to create a model from your developed layers. It requires only that you define the input and output layers, and it maps out the structure and graph of the network architecture.

Lastly, we train the model.

model.compile(loss='categorical_crossentropy',  # loss inferred from the one-hot labels and softmax output
              optimizer='adam',
              metrics=['accuracy'])
model.fit(x_train, y_train,
          validation_data=(x_test, y_test),
          epochs=20,
          batch_size=batch_size)
# accuracy evaluation
score = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)
print("\nTest accuracy: %.1f%%" % (100.0 * score[1]))

Now we have successfully developed a convolutional neural network to distinguish handwritten digits with Tensorflow’s Functional API. We have obtained an accuracy of above 99%, and we can save the model & design a digit-classifier web application.




Building A Diverse, Strong Climate Workforce

Building a Diverse, Strong Climate Workforce Climate leaders from Boston University and around the country brief Congress on how they are working to train the next generation for careers solving the climate crisis

Pamela Templer (center), a BU College of Arts & Sciences professor of biology, participated in a virtual Congressional briefing event titled Building the Next Generation Climate Workforce, along with Shawn Jones (left), head of energy storage development at BlueWave Solar, and PhD candidate Yasmin Romitti (Pardee’12, GRS’25).

Climate Change


Climate change is throwing problems at society that are unlike any we’ve experienced before—from battling extreme heat to combating flooding from sea level rise. It’s forcing us to not only build adaptations for a changing world, but take on the monumental tasks of creating green infrastructure and transitioning from carbon-emitting fossil fuels to renewable energy. And every piece of solving the climate puzzle will require more and more involvement by people from all walks of life. 

In the face of unfolding climate challenges, many universities, community organizations, and private companies are training people who want to build a career in the climate workforce, whether in science, management, policy, communication, or governance fields. On June 9, Boston University brought together leaders in climate change solutions and research for a Congressional briefing on how to build a workforce centered on meeting the global challenges from climate change, a panel called Building the Next Generation Climate Workforce: Innovative Solutions from Around the Country.  

The briefing was hosted by BU Federal Relations and featured four different panelists with expertise in science-backed policymaking, expanding diversity in STEM, and preparing to meet energy demands from renewable sources like solar and wind. 

“The media often latches on to the darkest and worst side of an issue and climate change is no different,” said panel moderator Melissa Varga, science network community and partnerships manager for the Union of Concerned Scientists Center for Science and Democracy, in her opening remarks. “Don’t get me wrong, the situation is dire. Marginalized and communities of color are the worst and first hit, and immediate action is needed. But rarely do we hear about the innovative efforts that are actually working for communities.” Each speaker at the virtual panel shared how their work contributes to training a diverse climate workforce.

Varga introduced US Rep. Suzanne Bonamici (D-Ore.), who spoke about how Congress has a role to play supporting that workforce, including through legislation that can expand job training opportunities to communities who have historically been left behind. “Here in the Pacific Northwest, we feel these effects [of climate change] acutely,” she said. “Our only option moving forward is to implement sweeping adaptation measures and decarbonize as rapidly as possible.” 

The panelists included Shawn Jones, the head of energy storage development at BlueWave Solar, a solar energy developer based in Boston; Pamela Padilla, president of the Society for the Advancement of Chicanos/Hispanics and Native Americans in Science and vice president of research and innovation at the University of North Texas; BU PhD student Yasmin Romitti (Pardee’12, GRS’25); and Pamela Templer, a College of Arts & Sciences professor of biology and the director of BU’s URBAN Program.

“To me, a climate workforce means successfully preparing our graduates to tackle climate challenges,” Templer told The Brink prior to the event. “We know that temperatures are rising, weather events are becoming more extreme, and these are all impacting human health and well-being. Having graduates understand how we can both reduce the greenhouse gas emissions that lead to climate change and create solutions to help humans adapt to the ways that climate is already changing is essential.” 

The URBAN Program (Urban Biogeoscience and Environmental Health), funded by a $3 million National Science Foundation Research Traineeship Program grant, helps graduate students gain expertise in fields that include environmental health, biology, engineering, statistics, and more, as well as the skills needed to work across disciplines. The program offers professional development workshops, training students in science communication, the workings of municipal governments, and collaborating effectively with city leaders.

“I think about the droughts in the Southwest, and how the Navajo Nation is addressing this. It’s important that we communicate knowledge to families and our elders in the community so they understand and know how they can participate,” Padilla said. She also discussed how government agencies can bolster her organization’s work in increasing participation from Chicano, Hispanic, and Native American people in science and technology. “They can move the needle with dollars, as well as support.” 

On the private industry side of the climate workforce, Jones is working to transform access to renewable energy with community solar projects and storage solutions, while volunteering in the community and offering students mentoring opportunities.   

“You don’t need a science degree to tackle climate change,” Jones said. He also echoed the need to bring diverse talent and individuals from all backgrounds into climate-centered careers. 

“Everyone understands what a doctor does, what lawyers and engineers do, but the climate work? I still don’t think people understand what we actually do, so we have to bring a little bit more transparency to our process, so [more people] understand what skills are needed,” he said.  

Asked how Congress can further support efforts to train students working in climate, Templer said that “Congress can continue to support graduate training grants—like the National Science Foundation Research Traineeship Program—that enable universities like BU to create new interdisciplinary hands-on, solutions-oriented training programs. It would be helpful if Congress could allocate more funds to ensure that successful graduate programs like ours continue.” 

Romitti, a climate, energy, and health student who’s researching how people adapt to heat in cities, said the URBAN Program has been invaluable, providing her with a community and experience creating policy-relevant research. “Looking ahead, I’m very excited to be part of this next-generation climate workforce,” Romitti said. “What’s next for me, I don’t exactly know, but I think that’s a good thing, because it just shows that because of the training and opportunities I’ve had with URBAN, I feel like I have the tools and the resources to be a competitive applicant across different sectors.”

All the panelists shared their hope for more investment at the federal level in training students interested in climate-focused careers that can elevate diversity and bring more opportunities for communities that are impacted the most by climate change. 

Varga closed by saying: “There’s so much to be optimistic about, and yet still so much more work to do.”


Get A More Useful New Tab Page For Chrome With Humble New Tab

Adding Content to the “New Tab Page”

First, grab Humble New Tab from the Chrome Web Store. The confirmation dialog that pops up may look intimidating, but the long list of permissions are necessary to give Humble New Tab the ability to manage your new tab page as thoroughly as it does.

You’re Done!

Bertel King, Jr.

