Friday, 30 June 2023

A Fourth Of July Suggestion

The Fourth of July is my least favorite holiday. Not because I'm unpatriotic, but because I have dogs. Every year around this time, we close up all the windows regardless of the weather, turn on the air conditioning, and leave the TV or radio on, for noise value if nothing else. And no matter what precautions we take, I always end up trying to comfort at least one of our dogs who is scared out of its mind by some idiot or idiots with fireworks.

Animal shelters must hate this time of year too, since it's when they take in the most lost and stray dogs, pets that got scared and took off.

Furthermore, it's not just animals that don't appreciate your personal fireworks display:

[Image: fireworks are not appreciated by everyone]

So this year, let's try something different. Instead of buying all those fireworks, how about buying some cans of dog or cat food or a few bags of kibble and donating them to the local animal shelter?

This year, don't make a noise - make a difference.

Open thread below...




from Latest articles from Crooks and Liars
via Click me for Details

Goldman may be trying to bail on Apple Card

Four years after partnering with Apple on the launch of the Apple Card, Goldman Sachs may be eyeing the exits.

The Wall Street Journal reports that Goldman is “looking for a way out” of its high-profile deal with Apple, which recently expanded to include savings accounts for Apple Card holders. 

The investment banking firm is apparently in talks to offload the partnership to American Express, the WSJ report added, but so far nothing seems to be set in stone, nor is it clear whether Apple would support the handoff.

However, it wouldn’t be surprising if such an arrangement comes to pass. Earlier this year, Goldman CEO David Solomon said he was “considering strategic alternatives” for the investment firm’s consumer arm. Beyond its deal with Apple, Goldman’s consumer-facing business includes a credit card partnership with General Motors as well as GreenSky, the lending company Goldman bought for $2.2 billion in 2021.

For their part, Apple and Goldman did not immediately respond to requests for comment on the WSJ story. CNBC later published a similar report, citing its own unnamed sources.

Goldman may be trying to bail on Apple Card by Harri Weber originally published on TechCrunch



from TechCrunch
via Click me for Details

Bird founder Travis VanderZanden officially leaves the nest

Travis VanderZanden’s slow-motion departure from Bird is now complete. The scooter rental company announced in a late-Friday news dump that the executive has stepped down from his role as chairperson of Bird’s board, “effective immediately.”

Replacing VanderZanden is John Bitove, who played a role in saving Bird’s bacon this past December via its merger with Bird Canada.

VanderZanden had led the micromobility company from its inception as its president and founding CEO, but that all changed last year when Bird’s declining stock price culminated in a delisting warning from the New York Stock Exchange. Soon after, VanderZanden stepped down from his role as president, handing over the title to Bird’s then-chief operating officer Shane Torchiana. Torchiana went on to assume the CEO post as well several months later. At the time, VanderZanden called the reorg a “long-planned transition.”

According to Bird, VanderZanden “decided to step down [from the board] to pursue other ventures.” In a similarly vague yet intriguing statement, VanderZanden added that he intends to return to his “entrepreneurial roots and incubate some new ideas.”

TechCrunch has reached out for more information on the founder’s departure and will update this story when we hear back.

Bird founder Travis VanderZanden officially leaves the nest by Harri Weber originally published on TechCrunch




Right Wing Tries To Hijack MLK's Messages

After the kangaroo Christian nationalist Supreme Court overturned affirmative action, Republican lawmakers and their media minions tried to erase Martin Luther King Jr.'s powerful messages and replace them with their conservative, racist beliefs.

And let's not forget: in this opinion, consideration of race is not entirely off the table.

They said that they can talk about individual struggles.

So you can use your race to show your character, to show how you overcame adversity, your strength and your courage.

Look, as a young man, I lost my father to a drunk driver.

My brother committed suicide.

So when I wrote my essays, it wasn't about that, but it was about how I overcame that.

It showed my character.

That's what won today rather than the color of your skin.

You know, one of the things that many people, especially on the right, have taken away is that this basically fulfills Dr. King's dream of saying that you would be judged by the content of your character, not by the color of your skin.

Yeah, that is a good point.

But I would say Dr. King would probably be against this decision, and he would be profoundly shocked by it.

This a-hole uses his personal tragedies as a template to brag about his character? That has nothing to do with civil rights on any level. Racism is not about character; it's all about hatred. How can your character be properly judged when you're immediately dismissed because of the color of your skin?





Vice President Of Russian Bank Mysteriously Falls Out Of Window

Unlike a lot of these types of stories out of Russia, this one doesn't seem to be politically motivated, just some random drama. Baikova's boyfriend was also at the scene in her apartment when she fell to her death. Oddly though, it took nearly a week for the incident to become news in Russia.

Source: Daily Mail

The glamorous vice-president of a Russian bank has reportedly plunged to her death after falling from the window of her Moscow apartment.

Kristina Baikova, 28, an executive at Loko-Bank, is just the latest in a string of mysterious deaths among Russia's top business people.

Ms Baikova allegedly fell from her 11th floor apartment on Khodynsky Boulevard in the early hours of last Friday. She died instantly at the scene.

The bank executive was with a 34-year-old friend, thought to be named Andrei, at the time of the incident after inviting him over to her home for a drink.

An investigation into her death has been launched.

Are corporations too influential?

Welcome to Startups Weekly. Sign up here to get it in your inbox every Friday.

This week, I’ve been doing a lot of thinking about how some of the biggest companies in the world have as much — if not more — power than entire countries. Most countries, at least, have some level of democratic oversight, but that isn’t true in the same way for companies. My question, then: In a world where the policies of, say, Facebook, YouTube and Twitter become de facto standards all around the world, should we have a greater degree of say (TC+) in what those policies are?

The other thing that’s kept me busy this week is fundraising. Alex talked with 11 VCs (TC+) about how hard it was for their companies to raise so far this year. Meanwhile, I talked with a number of founders who were really struggling to raise money. The truth is, the founders struggling the most have three things in common (TC+).

Now let’s take a look at what happened in the world of startups this week.

Notes from the security frontlines

[Image: two figures using phones amidst location pins on a map]

Image Credits: Bryce Durbin / TechCrunch

The most popular story on TechCrunch in the past week was one of my own, which came with a curious backstory: Flipper Devices was founded in Moscow, Russia, in 2020, by a Ukrainian founder and a largely Russian team. I ran the headline that a “Russian hacking device” had made $80 million worth of sales, only for a bunch of PR people to get very upset with me for calling the company, which was founded in Russia and whose team is still 90% Russian, Russian. Don’t get me wrong, I get why a company making a hacking device might not want to be associated with Russia — and the company has gone to great lengths to scrub any traces of that connection from the internet. The whole story was pretty weird, and concluded with me getting an unsolicited scan of the founder’s (Ukrainian) passport in my email inbox. Very curious indeed.

That sounds secure…: In a beacon of “here’s what not to do,” Lorenzo reports that an Illinois high school accidentally changed every student’s password to ‘Ch@ngeme!’. The problem? For a moment there, every student knew every other student’s password. D’oh.

Stupid and pointless: Prosecutors called for the British hacker responsible for the 2020 Twitter breach to serve at least seven years. Zack reports that the hacker was sentenced to five years behind bars. The convicted hacker described his crimes as “stupid and pointless.” Who am I to disagree?

Watching the watchers: Zack reports that Polish-developed stalkerware LetMeSpy, a phone-tracking app, says it was hacked. The leaked data included years of victims’ call logs and text messages dating back to 2013.

News you can touch. Yep, it’s hardware.

[Image: artificial intelligence]

Image Credits: Getty Images

A ton of interesting things happened in startup hardware land this week. Uplift Labs signed an interesting deal with Major League Baseball to use the startup’s 3D motion tracking tech to help scout for promising players.

Hot on the heels of its previous $14 million fundraise, Realtime Robotics raised another $10 million or so, representing the third close on what now seems like a never-ending Series A financing for the manufacturing automation startup.

Apropos robotics, Brian also had a fascinating story today on how robots are learning from watching YouTube videos. If my YouTube recommendations are anything to go by, every robot in the world will very soon be an expert woodworker that does very stupid things with explosives.

Who’s a good bot? That’s right, you’re a good bot: In a, “Geez, I feel safer already” type moment, Brian reports that the House GOP discussed the use of robot dogs to patrol U.S. borders.

It flies and it counts. That’s just what it does: Kate reports that B Garage raised $20 million for its warehouse inventory drones. And as we’re talking about flying inventory drones, Brian reported that Gather AI bought drone inventory competitor Ware.

Walking? Feh, check the webcam: The lazy among us may have pointed a webcam at the oven to keep an eye on a pizza, but Devin reports that Lilz takes the same concept to a whole ‘nother level, bringing its gauge-watching smart cameras to the U.S. and raising $4 million.

Startups that are going places

[Image: two Joby Aviation eVTOLs set in front of a sunset]

Image Credits: Joby Aviation

Raise your hand if you saw this one coming (while I sit on my hands, because I really did not) — but it seems like the Tesla charging standard is gaining a foothold very quickly. First, Texas said that state-funded EV chargers had to include Tesla plugs (now known as the North American Charging Standard, or NACS), and it seems like Washington state may be following suit.

Wheeee: You couldn’t force me on board one of these things with a gun, but Joby Aviation has reasons to celebrate, as Rebecca reports that the company received a permit to fly its first eVTOL built on a production line.

Pulling the e-brake: Kate reports that Singapore ride-hailing firm Grab is laying off over 1,100 employees, around 11% of its staff — its first big round of layoffs since 2020.

End of the road for Lordstown: It’s been an uphill battle for Lordstown Motors. Rebecca reports that the company is suing Foxconn, claiming fraudulent conduct that “destroyed” the American company’s business. Over on TC+, Alex ponders that there aren’t many SPAC deals left that didn’t come crashing down painfully and spectacularly. Canoo, anyone?

Despite all its rage, it is still just a car in a cage: Even as Lordstown implodes and a lot of the other EV companies are struggling, Faraday Future raises $90 million to keep itself alive.

Top reads on TechCrunch

[Image: Forcite smart helmet]

Image Credits: Forcite

Foo-wee, it’s been a lively week. My personal favorite was Tim’s story about Forcite launching a $1,100 smart helmet, finally bringing a version of the decade-old Skully dream to fruition.

U so basic: Netflix decided that it had enough of letting its users skate by on the cheap, and Ivan reported that the streaming giant quietly axed its basic plan in Canada.

We totally have lots of users, promise! Some strange dodginess this week — Amanda reported that unicorn social app IRL is shutting down after admitting 95% of its users were fake.

Yeah, saw that one coming: In my very personal opinion, Shein — and other, similar purveyors of essentially disposable clothing — is the literal worst for the environment. It seems like the company got a sheen of comeuppance, as Amanda reports that an influencer’s highly curated trip to a Chinese factory backfired.

The crowd is going Vilnius: Europe keeps investing huge sums of money into tech ecosystems, and Paul reports that Lithuania’s capital Vilnius is about to invest more than $100 million into “Europe’s largest tech campus.”


Get your TechCrunch fix IRL. Join us at Disrupt 2023 in San Francisco this September to immerse yourself in all things startup. From headline interviews to intimate roundtables to a jam-packed startup expo floor, there’s something for everyone at Disrupt. Save up to $600 when you buy your pass now through August 11, and save 15% on top of that with promo code STARTUPS. Learn more.

Are corporations too influential? by Haje Jan Kamps originally published on TechCrunch




Thursday, 29 June 2023

Ron DeSantis Is The Wish.com Version Of Rick Perry

Gov. Ron DeSantis sounds like Rick Perry, who led the Department of Energy. So far, that's not so weird, but the Energy Department is also the agency the former governor of Texas wanted to abolish, only to forget its name during a 2011 presidential debate.

DeSantis said he would seek to abolish the Departments of Education, Commerce, and Energy and the IRS. He's a small-government type of Republican. He wants the government so small that there will no longer be oversight of our country's nuclear warheads -- because that's what the Department of Energy does.

NBC News reports:

Florida Gov. Ron DeSantis said Wednesday that if he is elected president he would seek to close four federal agencies as part of an effort to reduce the size of government.

"We would do Education, we would do Commerce, we'd do Energy, and we would do IRS," DeSantis said in an interview with Fox News's Martha MacCallum when he was asked whether he favored closing any agencies.

"If Congress will work with me on doing that, we'll be able to reduce the size and scope of government," he added. "If Congress won't go that far, I'm going to use those agencies to push back against woke ideology and against the leftism that we see creeping into all institutions of American life."





How confidential computing could secure generative AI adoption

Generative AI has the potential to change everything. It can inform new products, companies, industries, and even economies. But what makes it different and better than “traditional” AI could also make it dangerous.

Its unique ability to create has opened up an entirely new set of security and privacy concerns.

Enterprises are suddenly having to ask themselves new questions: Do I have the rights to the training data? To the model? To the outputs? Does the system itself have rights to data that’s created in the future? How are rights to that system protected? How do I govern data privacy in a model using generative AI? The list goes on.

It’s no surprise that many enterprises are treading lightly. Blatant security and privacy vulnerabilities coupled with a hesitancy to rely on existing Band-Aid solutions have pushed many to ban these tools entirely. But there is hope.

Confidential computing — a new approach to data security that protects data while in use and ensures code integrity — is the answer to the more complex and serious security concerns of large language models (LLMs). It’s poised to help enterprises embrace the full power of generative AI without compromising on safety. Before I explain, let’s first take a look at what makes generative AI uniquely vulnerable.

Generative AI has the capacity to ingest an entire company’s data, or even a knowledge-rich subset, into a queryable intelligent model that provides brand new ideas on tap. This has massive appeal, but it also makes it extremely difficult for enterprises to maintain control over their proprietary data and stay compliant with evolving regulatory requirements.


This concentration of knowledge and subsequent generative outcomes, without adequate data security and trust control, could inadvertently weaponize generative AI for abuse, theft, and illicit use.

Indeed, employees are increasingly feeding confidential business documents, client data, source code, and other pieces of regulated information into LLMs. Since these models are partly trained on new inputs, this could lead to major leaks of intellectual property in the event of a breach. And if the models themselves are compromised, any content that a company has been legally or contractually obligated to protect might also be leaked. In a worst-case scenario, theft of a model and its data would allow a competitor or nation-state actor to duplicate everything and steal that data.

These are high stakes. Gartner recently found that 41% of organizations have experienced an AI privacy breach or security incident—and over half are the result of a data compromise by an internal party. The advent of generative AI is bound to grow these numbers.

Separately, enterprises also need to keep up with evolving privacy regulations when they invest in generative AI. Across industries, there’s a deep responsibility and incentive to stay compliant with data requirements. In healthcare, for example, AI-powered personalized medicine has huge potential when it comes to improving patient outcomes and overall efficiency. But providers and researchers will need to access and work with large amounts of sensitive patient data while still staying compliant, presenting a new quandary.

To address these challenges, and the rest that will inevitably arise, generative AI needs a new security foundation. Protecting training data and models must be the top priority; it’s no longer sufficient to encrypt fields in databases or rows on a form.

In scenarios where generative AI outcomes are used for important decisions, evidence of the integrity of the code and data—and the trust it conveys—will be absolutely critical, both for compliance and for potentially legal liability management. There must be a way to provide airtight protection for the entire computation and the state in which it runs.

The advent of “confidential” generative AI

Confidential computing offers a simple, yet hugely powerful way out of what would otherwise seem to be an intractable problem. With confidential computing, data and IP are completely isolated from infrastructure owners and made accessible only to trusted applications running on trusted CPUs. Data privacy is ensured through encryption, even during execution.

Data security and privacy become intrinsic properties of cloud computing — so much so that even if a malicious attacker breaches the infrastructure, data, IP, and code remain completely invisible to that bad actor. This is perfect for generative AI, mitigating its security, privacy, and attack risks.
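As a rough mental model of that trust boundary, here is a toy simulation in Python. It is illustration only, not real confidential computing (which relies on hardware features such as Intel SGX, AMD SEV, or cloud "confidential VMs"), and the class names and the XOR "cipher" are invented for the sketch: the untrusted host only ever handles ciphertext, while plaintext exists solely inside the enclave stand-in.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" -- for illustration only, NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyEnclave:
    """Stands in for a trusted execution environment: it holds the key,
    so plaintext exists only inside this object's methods."""
    def __init__(self):
        self._key = secrets.token_bytes(32)

    def seal(self, plaintext: bytes) -> bytes:
        return xor(plaintext, self._key)

    def word_count(self, sealed: bytes) -> int:
        # Decrypt, compute, and release only the non-sensitive result.
        return len(xor(sealed, self._key).split())

class UntrustedHost:
    """The infrastructure owner: stores and forwards ciphertext blobs,
    but never sees a key or any plaintext."""
    def __init__(self):
        self.storage = []

    def store(self, blob: bytes):
        self.storage.append(blob)

enclave = ToyEnclave()
host = UntrustedHost()

secret_doc = b"merger target: ACME Corp, offer 12 USD per share"
host.store(enclave.seal(secret_doc))      # host only ever holds ciphertext
assert secret_doc not in host.storage     # plaintext never reached the host
print(enclave.word_count(host.storage[0]))  # computation happens inside
```

Even if an attacker dumps everything the host stores, they get only sealed blobs; the design choice is that decryption keys never leave the trusted boundary.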

Confidential computing has been increasingly gaining traction as a security game-changer. Every major cloud provider and chip maker is investing in it, with leaders at Azure, AWS, and GCP all proclaiming its efficacy. Now, the same technology that’s converting even the most steadfast cloud holdouts could be the solution that helps generative AI take off securely. Leaders must begin to take it seriously and understand its profound impacts.

With confidential computing, enterprises gain assurance that generative AI models only learn on data they intend to use, and nothing else. Training with private datasets across a network of trusted sources across clouds provides full control and peace of mind. All information, whether an input or an output, remains completely protected, and behind a company’s own four walls.

How confidential computing could secure generative AI adoption by Walter Thompson originally published on TechCrunch




Fund of funds are starting to play a different role for venture LPs

Funds of funds (FoF) were created to serve as a bridge for LPs to get access to managers they couldn’t back otherwise. But in an environment where funds are not seeing consistent support from their existing LPs, and there are more venture funds than ever, is their role still relevant?

Fund of funds fundraising — say that five times fast! — has declined for years. To compare, traditional U.S. venture firm fundraising set a record in 2022 with $162 billion. U.S.-based VC FoF raised just $400 million in the first quarter of 2023, according to PitchBook, and $3 billion in 2022. This compares to $24.4 billion in 2021 and $33.7 billion — the fundraising peak — in 2017.

It’s not surprising that many LPs have soured on the strategy, said Kyle Stanford, a senior venture analyst at PitchBook. For one, backers of these funds pay a mix of fees to both the FoF and the underlying commitments the FoF manager makes.

“LPs have that double layer of fees. And that extra time it takes after [an LP] invests in the fund of funds and then have it deployed is just something that LPs right now just don’t want to deal with,” Stanford told TechCrunch+.
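To see why that double layer of fees stings, here is a back-of-the-envelope sketch in Python. The fee levels are hypothetical (a 2%-and-20%-style structure is assumed at the VC layer and a lighter layer at the FoF; the article quotes no actual figures), so treat this as an illustration of the mechanism, not real fund economics.

```python
def net_multiple(gross_multiple, mgmt_drag, carry):
    """Return an investor's net multiple after one layer of fees.

    gross_multiple: return before fees (e.g. 3.0 means 3x)
    mgmt_drag: cumulative management fees over the fund's life,
               as a fraction of committed capital (e.g. 0.20 = 20%)
    carry: carried interest taken on profits (e.g. 0.20 = 20%)
    """
    after_mgmt = gross_multiple - mgmt_drag
    profit = max(after_mgmt - 1.0, 0.0)
    return after_mgmt - carry * profit

# Hypothetical numbers: underlying VC funds gross 3x; 2-and-20 at the
# VC layer, then an extra 1%-per-year-ish / 10% layer at the FoF.
vc_net = net_multiple(3.0, mgmt_drag=0.20, carry=0.20)     # what the FoF receives
lp_net = net_multiple(vc_net, mgmt_drag=0.10, carry=0.10)  # what the FoF's LP keeps

print(f"Direct LP in the VC fund nets ~{vc_net:.2f}x")
print(f"LP going through the FoF nets ~{lp_net:.2f}x")
```

Same underlying portfolio, but the FoF's LP ends up with a visibly smaller multiple, which is the "double layer of fees" Stanford describes.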

And with so many new firms and funds in the market, the issue of LPs not getting access to attractive VC funds is largely moot; that barrier no longer really exists, he said. “There has been way more opportunity to invest in a VC than there has ever been in the past,” he said. “For new LPs coming into the market, they didn’t need to go to a fund of funds to get access.”

But to be clear, even if the funding numbers are down, FoF still holds a place in the future of venture — maybe just a different one than they did traditionally. Multiple firms have started innovating on the model, and FoF can still help LPs get access to the managers they can’t invest in otherwise, albeit for different reasons than before.

Fund of funds are starting to play a different role for venture LPs by Rebecca Szkutak originally published on TechCrunch




Say goodbye to Q2 and the crypto hacks, scams and rug pulls that came with it

Follow me on Twitter @Jacqmelinek for breaking crypto news, memes and more.

Welcome back to Chain Reaction.

As if the pessimism around crypto weren’t enough, the industry is facing yet another quarter of hackers and scammers looking to make a quick buck. And to make things worse, it’s getting harder to trace and recover lost funds as well.

According to a new report, only $4.9 million was recovered of the $204.3 million the industry lost to hacks, scams and rug pulls in the second quarter.

The report, compiled by web3 “super app” and antivirus solution De.Fi with data from the REKT database, detailed that so far this year the industry had recovered about $183 million, or nearly 28% of the $666.5 million lost to scams and hacks.
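The report's percentages hold up under quick arithmetic. Here is the calculation as a sketch, using only the dollar figures quoted above:

```python
# Figures quoted from the De.Fi / REKT report (millions of USD).
q2_lost, q2_recovered = 204.3, 4.9
ytd_lost, ytd_recovered = 666.5, 183.0

q2_rate = q2_recovered / q2_lost * 100
ytd_rate = ytd_recovered / ytd_lost * 100

print(f"Q2 recovery rate: {q2_rate:.1f}%")              # a dismal ~2.4%
print(f"Year-to-date recovery rate: {ytd_rate:.1f}%")   # ~27.5%, i.e. nearly 28%
```

In other words, recovery in Q2 alone ran an order of magnitude below the year-to-date average, which is what "getting harder to trace and recover lost funds" looks like in numbers.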

The report also found that exploits and rug pulls accounted for $55.3 million and $47.3 million, respectively, in Q2, highlighting that risks through bad actors are “rampant in equal measure.”

The TLDR? Be careful out there, because hackers are still hackin’ and scammers are still scammin’ — even in a bear market.

This week in web3

Q2 failed to bring a funding reprieve for web3 startups and unicorns (TC+)

We’re already halfway into 2023, which means we’re only a couple weeks away from brand new, sizzling data for the second quarter. However, it’s always wise to keep an eye on the horizon, so we’ve decided to draw the few conclusions about web3 and unicorn funding trends that we can from early data on the past three months.

Coinbase execs: As global crypto policy grows, U.S. has urgent need for legislation (TC+)

Coinbase, one of the largest crypto exchanges globally, has been around for 10 years. And while the company has grown its offerings, products and services, its policy talking points haven’t changed dramatically, Kara Calvert, head of U.S. policy at Coinbase, told TechCrunch+. But what has changed, she said, is the “momentum and urgency” for digital asset legislation and rules at a federal level in the U.S.

AI and crypto integration is going to happen whether you want it or not (TC+)

As artificial intelligence continues to grow to new heights of popularity, industry players are considering new ways the technology could integrate with crypto and blockchains. During Coinbase’s State of Crypto Summit on Thursday, venture capitalists and AI experts shared their thoughts during a panel on what similarities and differences the industries have and how investors, builders and users can capitalize on it.

Crypto startup Pillow, backed by Accel and Quona, to discontinue all services

Singapore-headquartered Pillow plans to discontinue all its services and app in the coming weeks, it warned customers Friday, citing regulatory uncertainty that has claimed countless other crypto startups in recent quarters. It had raised about $21 million altogether and counted Accel India, Quona Capital, Elevation Capital and Jump Crypto among its backers. Pillow revealed its $18 million Series A funding in October last year.

Terraform Labs founder Do Kwon jailed four months in Montenegro

Another chapter was published in the long and bizarre saga of Terraform Labs’ Do Kwon. The disgraced crypto founder will spend four months in a jail in Montenegro for falsifying official documents. The next step for Kwon is still unclear as both the U.S. and South Korea have been seeking to extradite him over charges in both countries relating to the collapse of Terraform Labs.

The latest pod

For this week’s episode, Jacquelyn interviewed Jack Lu, co-founder and CEO of NFT marketplace Magic Eden. This is his second time on Chain Reaction, but the market has evolved a lot since the last time he came on in August 2022, so we’re excited to have him back!

Before co-founding Magic Eden in 2021, Lu worked as a product manager at Google and a consultant for Boston Consulting Group.

Magic Eden originally began as a Solana-based NFT trading platform, but has expanded its support to other blockchain networks like Polygon, Ethereum and Bitcoin. Today, it has grown into one of the largest NFT marketplaces, with over 8,000 collections, about $3 billion in NFT transactions and 22 million unique monthly visitors. In June 2022, Magic Eden raised $130 million in a Series B round that granted it unicorn status.

We discussed why Magic Eden expanded its support to other blockchains, adding BRC-20 token support to its secondary platform and how the company plans on staying competitive in the constantly changing market.

We also talked about:

  • NFT market volatility
  • Royalty fees
  • Web3 gaming expansion
  • Advice for the NFT community

Subscribe to Chain Reaction on Apple Podcasts, Spotify or your favorite pod platform to keep up with the latest episodes, and please leave us a review if you like what you hear!

Follow the money

  1. Bitpanda’s crypto exchange separated from Bitpanda and secured $33 million
  2. Gaming platform Mythical Games raised $37 million in an extended Series C1 round
  3. Web3 gaming platform Pixion Games raised $5.5 million
  4. AI-powered crypto search engine Kaito raised $5.5 million in a Series A round
  5. Startale Labs raised $3.5 million in a seed round for web3 infrastructure for public goods

This list was compiled with information from Messari as well as TechCrunch’s own reporting.

To get a roundup of TechCrunch’s biggest and most important crypto stories delivered to your inbox every Thursday at 12 p.m. PT, subscribe here.

Say goodbye to Q2 and the crypto hacks, scams and rug pulls that came with it by Jacquelyn Melinek originally published on TechCrunch




Nancy Mace Endorses Bus Project She Voted Against

She voted against the funding but showed up at the press conference to take credit anyway.

Source: Post and Courier

NORTH CHARLESTON — A routine press conference on a federal grant for Charleston’s bus system put Republican U.S. Rep. Nancy Mace on the defensive after Democrats pounced on the fact she actually voted against the bill that made it happen.

While Mace voted against the 2021 Bipartisan Infrastructure Act, even calling it a “fiasco” and “socialist wish list,” she appeared at the June 28 press event in support of the local effort.

The law brought a nearly $26 million grant for a regional transit hub and will help the Charleston Area Regional Transportation Authority transition to a fully electric bus fleet by 2040.

Mace, who joined the majority of Republicans who opposed the bill, defended her support for the transit center by saying she backs anything that benefits the Charleston area, minus the politics.

“What do you want me to do, turn my back on the Lowcountry when we get funding for public transit? Absolutely not,” Mace said when asked about the optics of the moment.

[Embedded tweet with the explainer note on her crass antics.]





Wednesday, 28 June 2023

Kayleigh McEnany Puts On Her Best MAGA Suit To 'Whatabout'

Former Trump White House press secretary Kayleigh McEnany took the concept of "whataboutism" to the farthest depths of propaganda trying to defend the tiny-fingered, cheeto-faced, ferret-wearing shitgibbon.

Kayleigh joined Hannity to discuss the damning audio CNN released proving Trump was showing top-secret documents about a U.S. attack plan against Iran to authors writing a book who were not authorized to see them.

McEnany made believe she was going to be fair and impartial.

"Yeah, all those questions will be answered in a court of law. I do agree with Ari. We have to look at and respect classification. It's important for national security," she said. "It's important for the people who gather and collect that information and sometimes put their lives on the line doing so, oftentimes, I should say."

Then she put on her MAGA hat and said these charges would never be brought against any other candidate except for Trump.

I agree with that because Trump is the most corrupt and immoral man maybe to ever occupy the White House.

She then pivoted.

"I have real questions about Joe Biden, not just taking classified information when he had no declassification authority, but taking them during a time as senator when he had no right to take them, when he viewed them in a SCIF and somehow they exited with him," she said. "And President Trump is right to point to many of those double standards."





Age of AI: Everything you need to know about artificial intelligence

AI is appearing in seemingly every corner of modern life, from music and media to business and productivity, even dating. There’s so much going on that it can be hard to keep up — so read on to find out everything from the latest big developments to the terms and companies you need to know in order to stay current in this fast-moving field.

To begin with, let’s just make sure we’re all on the same page: what is AI?

Artificial intelligence, often used interchangeably with machine learning, is a kind of software system based on neural networks, a technique that was actually pioneered decades ago but has blossomed very recently thanks to powerful new computing resources. AI has enabled effective voice and image recognition, as well as the ability to generate synthetic imagery and speech. And researchers are hard at work making it possible for an AI to browse the web, book tickets, tweak recipes and more.

Oh, but if you’re worried about a Matrix-type rise of the machines — don’t be. We’ll talk about that later!

Our guide to AI has three main parts, each of which we will update regularly and can be read in any order:

  • First, the most fundamental concepts you need to know as well as more recently important ones.
  • Next, an overview of the major players in AI and why they matter.
  • And last, a curated list of the recent headlines and developments that you should be aware of.

By the end of this article you’ll be about as up to date as anyone can hope to be these days. We will also be updating and expanding it as we press further into the age of AI.

AI 101

Illustration: a deep learning neural network, shaped like a human brain, that takes in data and produces a result.

Image Credits: Andrii Shyp / Getty Images

One of the wild things about AI is that although the core concepts date back more than 50 years, few of them were familiar to even the tech-savvy before very recently. So if you feel lost, don’t worry — everyone is.

And one thing we want to make clear up front: Although it’s called “artificial intelligence,” that term is a little misleading. There’s no one definition of intelligence out there, but what these systems do is definitely closer to calculators than brains. The input and output of this calculator is just a lot more flexible. You might think of artificial intelligence like artificial coconut — it’s imitation intelligence.

With that said, here are the basic terms you’ll find in any discussion of AI.

Neural network

Our brains are largely made of interconnected cells called neurons, which mesh together to form complex networks that perform tasks and store information. Recreating this amazing system in software has been attempted since the ’60s, but the processing power required wasn’t widely available until 15-20 years ago, when GPUs let digitally defined neural networks flourish. At their heart they are just lots of dots and lines: the dots are data and the lines are statistical relationships between those values. As in the brain, this can create a versatile system that quickly takes an input, passes it through the network and produces an output. This system is called a model.
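The dots-and-lines picture above can be sketched in a few lines of code. This is a toy illustration, not a real model: random, untrained weights stand in for the learned statistical relationships.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple nonlinearity: negative values become zero
    return np.maximum(0, x)

# The "lines": weights connecting 3 inputs -> 4 hidden neurons -> 2 outputs
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))

def forward(x):
    # The "dots": values at each layer as the input flows through the network
    hidden = relu(x @ W1)
    return hidden @ W2

output = forward(np.array([1.0, 0.5, -0.2]))
print(output.shape)  # two output values
```

In a trained network, the weight matrices would encode statistics learned from data rather than random noise, but the flow from input to output is the same.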

Model

The model is the actual collection of code that accepts inputs and returns outputs. The similarity in terminology to a statistical model or a modeling system that simulates a complex natural process is not accidental. In AI, model can refer to a complete system like ChatGPT, or pretty much any AI or machine learning construct, whatever it does or produces. Models come in various sizes, meaning both how much storage space they take up and how much computational power they take to run. And these depend on how the model is trained.

Training

To create an AI model, the neural networks making up the base of the system are exposed to a bunch of information in what’s called a dataset or corpus. In doing so, these giant networks create a statistical representation of that data. This training process is the most computation-intensive part, often taking weeks or months (you can kind of go as long as you want) on huge banks of high-powered computers. The reason for this is that not only are the networks complex, but datasets can be extremely large: billions of words or images that must be analyzed and given representation in the giant statistical model. On the other hand, once the model is done cooking it can be much smaller and less demanding when it’s being used, a process called inference.

Image Credits: Google
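The training loop described above can be sketched at toy scale. This illustrative example fits a single parameter to a tiny dataset; real training does the same error-nudging over billions of parameters and examples.

```python
import numpy as np

# Tiny "dataset": the hidden pattern is y = 2x
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs

w = 0.0    # the model's single parameter, initially ignorant
lr = 0.01  # how big each corrective nudge is
for _ in range(1000):
    preds = w * xs
    grad = np.mean(2 * (preds - ys) * xs)  # how the error changes with w
    w -= lr * grad                         # nudge w to reduce the error

print(round(w, 3))  # has converged very close to 2.0
```

After training, the model has extracted the statistical relationship hidden in the data, and using it (multiplying by w) is far cheaper than the loop that found it.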

Inference

When the model is actually doing its job, we call that inference, very much in the traditional sense of the word: stating a conclusion by reasoning about available evidence. Of course it is not exactly “reasoning,” but statistically connecting the dots in the data it has ingested and, in effect, predicting the next dot. For instance, given “Complete the following sequence: red, orange, yellow…” it would find that these words correspond to the beginning of a list it has ingested, the colors of the rainbow, and infer the next item until it has produced the rest of that list. Inference is generally much less computationally costly than training: Think of it like looking through a card catalog rather than assembling it. Big models still have to run on supercomputers and GPUs, but smaller ones can be run on a smartphone or something even simpler.
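The rainbow example can be made concrete with a toy "model" that has ingested exactly one sequence; inference is then just statistical lookup, not reasoning.

```python
# One ingested sequence: the colors of the rainbow
corpus = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]

# "Training": record which word follows which
follows = {a: b for a, b in zip(corpus, corpus[1:])}

def complete(prefix):
    # "Inference": keep predicting the next item from recorded statistics
    out = list(prefix)
    while out[-1] in follows:
        out.append(follows[out[-1]])
    return out

print(complete(["red", "orange", "yellow"]))
# -> ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
```

A real LLM does this over billions of soft, probabilistic connections rather than one hard-coded list, but the "complete the pattern" character is the same.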

Generative AI

Everyone is talking about generative AI, and this broad term just means an AI model that produces an original output, like an image or text. Some AIs summarize, some reorganize, some identify, and so on — but an AI that actually generates something (whether or not it “creates” is arguable) is especially popular right now. Remember, though: just because an AI generated something, that doesn’t mean it is correct, or even that it reflects reality at all! Only that it didn’t exist before you asked for it, like a story or painting.

Today’s top terms

Beyond the basics, here are the AI terms that are most relevant in mid-2023.

Large language model

The most influential and versatile form of AI available today, large language models are trained on pretty much all the text making up the web and much of English literature. Ingesting all this results in a foundation model (read on) of enormous size. LLMs are able to converse and answer questions in natural language and imitate a variety of styles and types of written documents, as demonstrated by the likes of ChatGPT, Claude and LLaMa. While these models are undeniably impressive, it must be kept in mind that they are still pattern recognition engines, and when they answer, they are attempting to complete a pattern they have identified, whether or not that pattern reflects reality. LLMs frequently hallucinate in their answers, which we will come to shortly.

If you want to learn more about LLMs and ChatGPT, we have a whole separate article on those!

Foundation model

Training a huge model from scratch on massive datasets is costly and complex, so you don’t want to do it any more often than necessary. Foundation models are the big from-scratch ones that need supercomputers to run, but they can be trimmed down to fit in smaller containers, usually by reducing the number of parameters. You can think of those as the total dots the model has to work with, and these days it can be in the millions, billions or even trillions.

Fine tuning

A foundation model like GPT-4 is smart, but it’s also a generalist by design — it absorbed everything from Dickens to Wittgenstein to the rules of Dungeons & Dragons, but none of that is helpful if you want it to help you write a cover letter for your resumé. Fortunately, models can be fine tuned by giving them a bit of extra training using a specialized dataset, for instance a few thousand job applications that happen to be laying around. This gives the model a much better sense of how to help you in that domain without throwing away the general knowledge it has collected from the rest of its training data.

Reinforcement learning from human feedback, or RLHF, is a special kind of fine tuning you’ll hear about a lot — it uses data from humans interacting with the LLM to improve its communication skills.
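At toy scale, fine tuning looks like a short burst of extra training that starts from already-learned parameters rather than from scratch. A minimal, purely illustrative sketch (this is plain gradient descent, not RLHF):

```python
import numpy as np

w = 2.0  # a "pretrained" parameter, learned earlier from broad data (y = 2x)

# A small specialized dataset where the relationship is slightly different
xs = np.array([1.0, 2.0, 3.0])
ys = 2.5 * xs

lr = 0.01
for _ in range(200):  # far fewer steps than training from scratch
    grad = np.mean(2 * (w * xs - ys) * xs)
    w -= lr * grad

print(round(w, 2))  # nudged from 2.0 toward the specialized value 2.5
```

Because the model starts close to competent, a little specialized data and a short training run are enough to adapt it, which is exactly why fine tuning is so much cheaper than pretraining.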

Diffusion

From a paper on an advanced post-diffusion technique, you can see how an image can be reproduced from even very noisy data. Image Credits: OpenAI

Image generation can be done in numerous ways, but by far the most successful as of today is diffusion, which is the technique at the heart of Stable Diffusion, Midjourney and other popular generative AIs. Diffusion models are trained by showing them images that are gradually degraded by adding digital noise until there is nothing left of the original. By observing this, diffusion models learn to do the process in reverse as well, gradually adding detail to pure noise in order to form an arbitrarily defined image. We’re already starting to move beyond this for images, but the technique is reliable and relatively well understood, so don’t expect it to disappear any time soon.
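The forward half of that process, degrading an image step by step with noise, is easy to sketch; the hard part a real diffusion model learns is reversing it. A toy illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 8x8 "image" with some structure (a smooth gradient)
image = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))

noisy = image.copy()
for _ in range(50):
    # Mix in a little noise each step; the original gradually washes out
    noisy = 0.95 * noisy + 0.05 * rng.normal(size=image.shape)

# After 50 steps only about 0.95**50 (roughly 8%) of the original signal remains
print(float(np.mean((noisy - image) ** 2)))
```

Training shows the network many pairs of "slightly noisier" and "slightly cleaner" versions of images, so that at generation time it can start from pure noise and apply the learned cleanup step repeatedly.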

Hallucination

Originally this was a problem of certain imagery in training slipping into unrelated output, such as buildings that seemed to be made of dogs due to an over-prevalence of dogs in the training set. Now an AI is said to be hallucinating when, because it has insufficient or conflicting data in its training set, it just makes something up.

This can be either an asset or a liability; an AI asked to create original or even derivative art is hallucinating its output; an LLM can be told to write a love poem in the style of Yogi Berra, and it will happily do so — despite such a thing not existing anywhere in its dataset. But it can be an issue when a factual answer is desired; models will confidently present a response that is half real, half hallucination. At present there is no easy way to tell which is which except checking for yourself, because the model itself doesn’t actually know what is “true” or “false,” it is only trying to complete a pattern as best it can.

AGI or strong AI

Artificial General Intelligence, or strong AI, is not really a well-defined concept, but the simplest explanation is that it is an intelligence that is powerful enough not just to do what people do, but learn and improve itself like we do. Some worry that this cycle of learning, integrating those ideas, and then learning and growing faster will be a self-perpetuating one that results in a super-intelligent system that is impossible to restrain or control. Some have even proposed delaying or limiting research to forestall this possibility.

It’s a scary idea, sure, and movies like “The Matrix” and “Terminator” have explored what might happen if AI spirals out of control and attempts to eliminate or enslave humanity. But these stories are not grounded in reality. The appearance of intelligence we see in things like ChatGPT is an impressive act, but has little in common with the abstract reasoning and dynamic multi-domain activity that we associate with “real” intelligence. While it’s near-impossible to predict how things will progress, it may be helpful to think of AGI as something like interstellar space travel: We all understand the concept and are seemingly working toward it, but at the same time we’re incredibly far from achieving anything like it. And due to the immense resources and fundamental scientific advances required, no one is going to just suddenly accomplish it by accident!

AGI is interesting to think about, but there’s no sense borrowing trouble when, as commentators point out, AI is already presenting real and consequential threats today despite, and in fact largely due to, its limitations. No one wants Skynet, but you don’t need a superintelligence armed with nukes to cause real harm: people are losing jobs and falling for hoaxes today. If we can’t solve those problems, what chance do we have against a T-1000?


Top players in AI

OpenAI

ChatGPT welcome screen

Image Credits: Leon Neal / Getty Images

If there’s a household name in AI, it’s this one. OpenAI began as its name suggests, an organization intending to perform research and provide the results more or less openly. It has since restructured as a more traditional for-profit company providing access to its advanced language models like ChatGPT through APIs and apps. It’s headed by Sam Altman, a technotopian billionaire who nonetheless has warned of the risks AI could present. OpenAI is the acknowledged leader in LLMs but also performs research in other areas.

Microsoft

As you might expect, Microsoft has done its fair share of work in AI research, but like other companies has more or less failed to turn its experiments into major products. Its smartest move was to invest early in OpenAI, which scored it an exclusive long-term partnership; OpenAI’s models now power its Bing conversational agent. Though its own contributions are smaller and less immediately applicable, the company does have a considerable research presence.

Google

Known for its moonshots, Google somehow missed the boat on AI despite its researchers literally inventing the technique that led directly to today’s AI explosion: the transformer. Now it’s working hard on its own LLMs and other agents, but is clearly playing catch-up after spending most of its time and money over the last decade boosting the outdated “virtual assistant” concept of AI. CEO Sundar Pichai has repeatedly said that the company is aligning itself firmly behind AI in search and productivity.

Anthropic

After OpenAI pivoted away from openness, siblings Dario and Daniela Amodei left it to start Anthropic, intended to fill the role of an open and ethically considerate AI research organization. With the amount of cash they have on hand, they’re a serious rival to OpenAI even if their models, like Claude, aren’t as popular or well-known yet.

Stability

Image Credits: Bryce Durbin / TechCrunch

Controversial but inevitable, Stability represents the “do what thou wilt” open source school of AI implementation, hoovering up everything on the internet and making the generative AI models it trains freely available to anyone with the hardware to run them. This is very in line with the “information wants to be free” philosophy but has also accelerated ethically dubious projects like generating pornographic imagery and using intellectual property without consent (sometimes at the same time).

Elon Musk

Not one to be left out, Musk has been outspoken about his fears regarding out-of-control AI, as well as some sour grapes after he contributed to OpenAI early on and it went in a direction he didn’t like. While Musk is not an expert on this topic, as usual his antics and commentary do provoke widespread responses (he was a signatory on the above-mentioned “AI pause” letter) and he is attempting to start a research outfit of his own.


Latest stories in AI

China might further lose chip access in new US ban

The U.S. Department of Commerce could prohibit shipments of chips from manufacturers including Nvidia to customers in China as soon as early next month (July).

The latest move to weigh additional restrictions on AI chip export to China is part of the U.S.’s broader strategy to limit China’s progress in AI, particularly in the military sphere. However, these measures are also having an adverse impact on the commercial AI sector in China, where many firms operate with teams that span both the U.S. and China.

ChatGPT uses Bing and Bing uses ChatGPT

ChatGPT Plus subscribers can now access a new feature on the ChatGPT app called Browsing to have ChatGPT search Bing for answers to prompts or questions. OpenAI says that the Browsing feature is particularly useful for queries relating to current events and other information that “extend[s] beyond [ChatGPT’s] original training data.” When Browsing is disabled, ChatGPT’s knowledge cuts off in 2021.

AI can’t win a Grammy

If a musician’s AI-assisted composition is to be eligible for a Grammy, they’ll need to make sure that their human contribution is “meaningful and more than de minimis,” the rules now state. An update to Grammy awards’ eligibility criteria states that “[o]nly human creators are eligible to be submitted for consideration,” and that “[a] work that contains no human authorship is not eligible in any Categories.”

Google-owned research lab DeepMind claims its next chatbot will rival ChatGPT

DeepMind is using techniques from AlphaGo, DeepMind’s AI system that was the first to defeat a professional human player at the board game Go, to make a ChatGPT-rivaling chatbot called Gemini. If all goes according to plan, Gemini will have the ability to plan or solve problems as well as analyze text, DeepMind CEO Demis Hassabis told Wired’s Will Knight.

Inflection debuts its own foundation AI model to rival Google and OpenAI

The well-funded AI startup took the wraps off the large language model powering its Pi conversational agent. The model, called Inflection-1, is of roughly GPT-3.5 size and capability, as measured by the computing power used to train them. According to the results they published, Inflection-1 indeed performs well on various measures, like middle- and high-school-level exam tasks (think biology 101) and “common sense” benchmarks (things like “if Jack throws the ball on the roof, and Jill throws it back down, where is the ball?”). It mainly falls behind on coding, where GPT-3.5 beats it handily and, for comparison, GPT-4 smokes the competition; OpenAI’s biggest model is well known to have been a huge leap in quality there, so it’s no surprise.

Salesforce pledges to invest $500M in AI startups

Salesforce announced that it’s growing its Generative AI Fund from $250 million in size to $500 million. The Generative AI fund has already invested in several firms on the frontier of generative AI tech since launching in March. While far from the only fund investing primarily in generative AI, Salesforce aims to differentiate its tranche by prioritizing what it describes as “ethical” AI technologies.

Nvidia becomes a trillion-dollar company

GPU maker Nvidia was doing fine selling to gamers and cryptocurrency miners, but the AI industry put demand for its hardware into overdrive. The company has cleverly capitalized on this and the other day crossed the symbolic (but closely watched) trillion-dollar market cap threshold when its stock hit $413. They show no sign of slowing down, as they showed recently at Computex…

At Computex, Nvidia redoubles commitment to AI

Among a dozen or two announcements at Computex in Taipei, Nvidia CEO Jensen Huang talked up the company’s Grace Hopper superchip for accelerated computing (their terminology) and demoed generative AI that it claimed could turn anyone into a developer.

OpenAI’s Sam Altman lobbies the world on AI’s behalf

Altman was recently advising the U.S. government on AI policy, though some saw this as letting the fox set the rules of the henhouse. The E.U.’s various rulemaking bodies are also looking for input and Altman has been doing a grand tour, warning simultaneously against excessive regulation and the dangers of unfettered AI. If these perspectives seem opposed to you… don’t worry, you’re not the only one.

Anthropic raises $450 million for its new generation of AI models

We kind of spoiled this news when we published details of this fundraise and plan ahead of the official announcement, but Anthropic is now officially $450 million richer and hard at work on the successor to Claude and its other models. It’s clear the AI market is large enough that there’s room at the top for a few major providers — if they have the capital to get there.

TikTok is testing its own in-app AI called Tako

Video social networking platform TikTok is testing a new conversational AI that you can ask about whatever you want, including what you’re watching. The idea is that instead of just searching for more “husky howling” videos, you could ask Tako “why do huskies howl so much?” and it will give a useful answer as well as point you toward more content to watch.

Microsoft is baking ChatGPT into Windows 11

After investing hundreds of millions into OpenAI, Microsoft is determined to get its money’s worth. It’s already integrated GPT-4 into its Bing search platform, but now that Bing chat experience will be available — indeed, probably unavoidable — on every Windows 11 machine via a right-side bar across the OS.

Google adds a sprinkle of AI to just about everything it does

Google is playing catch-up in the AI world, and although it is dedicating considerable resources to doing so, its strategy is still a little murky. Case in point: its I/O 2023 event was full of experimental features that may or may not ever make it to a broad audience. But they’re definitely doing a full court press to get back in the game.

Age of AI: Everything you need to know about artificial intelligence by Devin Coldewey originally published on TechCrunch



from TechCrunch
via Click me for Details

ChatGPT prompts: How to optimize for sales marketing writing and more

ChatGPT, OpenAI’s AI-powered chatbot, has taken the world by storm.

Capable of writing emails, essays and more given a few short prompts, ChatGPT has become one of the fastest-growing apps in history. Beyond that, it’s begun to find a place in the enterprise, particularly with the launch of plugins that connect the chatbot to third-party apps, websites and services. Most recently, ChatGPT Plus subscribers now have access to a new feature called Browsing, which allows ChatGPT to search Bing for answers to prompts and questions.

But ChatGPT isn’t always the most cooperative assistant. Getting it to output something specific requires careful fine-tuning of the prompts.

A number of resources and guides for ChatGPT prompt writing have sprung up since the tool’s launch. But not all of them are especially easy to follow — or intuitive. To help folks who are new to ChatGPT as well as those looking to learn new tricks, we’ve compiled a list of the best ChatGPT prompts for different types of workflows — specifically writing, marketing, sales, students and tech enthusiasts.

The best ChatGPT prompts for sales

No one likes to write sales emails. No one. And while there’s plenty in the way of tools to tackle the task, many rely on templates with inflexible, repetitive language. Not so with ChatGPT.

When writing sales prompts for ChatGPT, though, the wording really matters. For example, consider the prompt:

Write a concise and informal cold email to a sales lead.

Compare it to:

Write a cold email to a sales lead.

You’ll notice that the results for the first, far more descriptive prompt are better — objectively better — than the results for the second. While not perfect, they’re a much better starting point for something, well, sendable.

You can take the ChatGPT prompt fine-tuning further. Let’s say you want copy for LinkedIn prospecting emails — LinkedIn being a great place to look for sales leads (as many marketers know). Try a prompt like:

John’s Linkedin summary: [insert text here] Write a cold email to Katie, who I just found on LinkedIn.

Katie Paterson over at the Zapier blog gave it a shot. The result was impressively personalized — and a lot better than most of the sales spam I’ve gotten over the years, truth be told.

ChatGPT needn’t be confined to the email realm. Vidyard writes about how the tool can be used to automate cold call scripts or sales pitch processes. Try something like:

Write a sales pitch for a marketing consultant offering solutions to small businesses struggling with low online visibility and poor search engine rankings.

Again, you’ll most likely have to tweak the results. But undeniably, it’s a time saver.

The best ChatGPT prompts for marketing

ChatGPT is an excellent marketing tool — or can be, if you use the right set of prompts. As with writing, it requires knowing the specific ways to prompt the model so that it understands your intention.

As any online marketer knows, keywords are an important part of the puzzle. Fortunately, ChatGPT’s a competent keyword generator. Use the prompt:

Generate a list of keywords for [insert text here], including long-tail and high-performing keywords.

That’ll provide a decent starting prompt for whatever copy you’re trying to write.
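If you generate prompts like this programmatically, a small template helper keeps the wording consistent. A minimal sketch: the bracketed placeholder becomes a function argument, the message shape follows the role/content format used by chat-style APIs, and the topic value is just an example.

```python
def keyword_prompt(topic):
    # Fill the "[insert text here]" slot of the keyword prompt above
    prompt = (
        f"Generate a list of keywords for {topic}, "
        "including long-tail and high-performing keywords."
    )
    # Chat-style APIs expect a list of role/content message dicts
    return [{"role": "user", "content": prompt}]

messages = keyword_prompt("home espresso machines")
print(messages[0]["content"])
```

The same pattern works for any of the prompts in this article that contain a fill-in-the-blank placeholder.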

Speaking of brainstorming copy, it’s no secret that ChatGPT can come in handy here, too — whether it’s for an ad or social media post. For example, take a look at this prompt from Tory Wenger over at Madgicx, which really illustrates the degree of specificity ChatGPT will accept:

Craft a compelling ad copy for our Facebook ad campaign, targeting users who have previously visited our website and creating a sense of urgency, as well as adding our offer for exclusive promotion to entice them to take action. The offer is [insert text here].

It’s as easy as that.

ChatGPT can also give marketing and brand advice, believe it or not, answering tough questions with surprising depth and nuance. WordStream’s Gordon Donnelly asked ChatGPT how to respond to negative comments and publicity:

As a social media marketing manager, how do I respond to people that are writing negative things about my products on Twitter?

ChatGPT’s response? A diplomatically worded email asking for feedback on a product, using wording like “Your feedback is essential to us” and “we want to make sure we’re exceeding your expectations.” Talk about measured!

The best ChatGPT prompts for writing

When it comes to writing, ChatGPT can be a useful companion indeed — serving as a brainstorming tool or streamlining the more monotonous bits of the writing process. But the chatbot isn’t always the most steerable or predictable unless you use very specific prompt wording.

For example, “priming” ChatGPT can set the tone and context. Try a prompt like:

“I’m a tech blogger and I need your help writing a blog post. The topic is CES. This post should be helpful for people who are interested in new and upcoming smartphones. Do not start writing yet. Do you understand?”

That’ll “ground” the tool, providing ChatGPT context for future questions.

Another nifty tip is using bullet points to guide ChatGPT as it writes. Try using a prompt like:

Write an introduction based on the bullet points below:

  • This is an article about a new tech product — a wireless air fryer.
  • The product costs $20.
  • The product will be available for sale on June 16.

Given a moment, ChatGPT will generate something coherent that incorporates details from each of the bullets.
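That bullet-point prompt can also be assembled from a list of facts, so the same structure is reusable across articles. A hedged sketch; the helper name and bullets are illustrative only.

```python
def intro_prompt(bullets):
    # Reproduce the structure shown above: an instruction line, a blank
    # line, then one bullet per fact
    lines = ["Write an introduction based on the bullet points below:", ""]
    lines += [f"- {b}" for b in bullets]
    return "\n".join(lines)

prompt = intro_prompt([
    "This is an article about a new tech product, a wireless air fryer.",
    "The product costs $20.",
    "The product will be available for sale on June 16.",
])
print(prompt)
```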

ChatGPT can also be “taught” to mimic style, voice and tone — a useful feature in instances where you’re trying to have it complete parts of an article or essay. Try entering this prompt:

Analyze the text below for style, voice and tone. Create a prompt to write a new paragraph in the same style, voice and tone. [insert text here]

It might not always get it right. But when instructed to write this way, ChatGPT is much more likely to produce something usable — and insightful.

The best ChatGPT prompts for students

Not every academic institution is on board with the idea of using ChatGPT as a writing tool — or even writing aid. But others are — and have gone to great lengths to incorporate ChatGPT into their curriculums. This writer supports the latter camp, but would advise students against using ChatGPT where prohibited by an instructor. You’ve been warned.

The sky’s the limit, really, when it comes to education-focused ChatGPT prompts. It really depends on the task at hand and the nature of the work. You could try, for instance, a prompt like this:

Help me write a research paper on the causes of the American Revolution.

Or a prompt like:

Can you help me explain the significance of the Magna Carta?

And ChatGPT will do its best to respond in a way that makes sense — if not perfect sense.

A word of warning when asking ChatGPT for facts and figures: It doesn’t always get it right. Sometimes, thanks to a phenomenon known as hallucination, the chatbot invents things — very confidently — out of whole cloth. That’s why it’s wise to fact-check answers from ChatGPT before pasting them into a piece.

Once again, ChatGPT can be asked to do more than simply write an essay or answer basic topical questions. Consider this prompt:

Help me create a study plan for my upcoming exams in history and political science.

You’ll need to be more specific than “history and political science,” lest the advice be overly broad. But ChatGPT — while it won’t do the studying for you — should provide a reasonable starting point.

The best ChatGPT prompts for tech enthusiasts

We’ve established that ChatGPT is a fine writer. But did you know that it’s a coder, too, and a mathematician?

Say you want to create a basic web form to collect contact information. ChatGPT will happily do that for you with a prompt like:

Act as a JavaScript Developer, Write a program that checks the information on a form. Name and email are required, but address and age are not.

The resulting code may contain some mistakes. ChatGPT certainly isn’t perfect. But it should be a reasonable starting point.

In a more sophisticated use case, ChatGPT can write database queries for applications — a task that normally takes a fair amount of time (and, sometimes, trial and error). Try this prompt for MySQL, one of the more popular relational database systems:

Write a MySQL Query

Tables: users and orders

Requirement: It should give user details who placed highest order today

Again, the results won’t be usable out of the box necessarily. But they’ll help you to get where you need to be.

Same goes for math questions. One of my favorite recent prompts from PromptHero, an AI prompt database, is this:

I want you to act as a math teacher. I will provide some mathematical equations or concepts, and it will be your job to explain them in easy-to-understand terms. This could include providing step-by-step instructions for solving a problem, demonstrating various techniques with visuals or suggesting online resources for further study. My first request is “I need help understanding how probability works.”

ChatGPT, you’ll find, can be a surprisingly thoughtful tutor.

ChatGPT prompts: How to optimize for sales, marketing, writing and more by Kyle Wiggers originally published on TechCrunch



from TechCrunch
via Click me for Details

Celestial AI raises $100M to transfer data using light-based interconnects

David Lazovsky and Preet Virk, technologists with backgrounds in semiconductor engineering and photonics, came to the joint realization several years ago that AI and machine learning workloads would quickly encounter a “data movement” problem. Increasingly, they predicted, it would become challenging to move data to and from compute hardware as AI models scaled past what could be kept on the die of any one memory chip.

Their solution — architected by Phil Winterbottom, previously a researcher at the distinguished Bell Labs — was an optical interconnect technology for compute-to-compute, compute-to-memory and on-chip data transmission. Along with Winterbottom, Lazovsky and Virk founded a startup, Celestial AI, to commercialize the tech. And now, that startup is attracting big backers.

Celestial AI today announced that it raised $100 million in a Series B round led by IAG Capital Partners, Koch Disruptive Technologies and Temasek’s Xora Innovation fund. The tranche, which brings Celestial AI’s total raised to more than $165 million, will be used to support the production of Celestial’s photonics platform by expanding the company’s engineering, sales and technical marketing departments, according to CEO Lazovsky.

Celestial has around 100 employees at present — a number that Lazovsky expects will grow to 130 by the end of the year.

“Today, compute and memory are closely coupled. The only way to add more high bandwidth memory is to add more compute, whether the additional compute is required or not,” Lazovsky told TechCrunch via email. “Celestial’s tech enables memory disaggregation.”

In a data center, memory is often one of the most expensive resources — in part because it’s not always used efficiently. Because memory is tied to compute, it’s challenging — and sometimes impossible, due to bandwidth constraints and sky-high latency — for operators to “disaggregate” and pool the memory across hardware within the data center.

According to an internal Microsoft study, up to 25% of memory in Azure is “stranded,” or left over, after the servers’ cores have been rented to virtual machines. Reducing this stranded memory could cut data center costs by 4% to 5%, the company estimated — potentially significant savings in the context of a multibillion-dollar operation.

Celestial — which began as a portfolio company of The Engine, the VC firm spun out of MIT in 2016 — developed an ostensible solution in its photonics-based architecture, which scales across multiple-chip systems. Using light to transfer data, Celestial’s tech can beam information both within chips and chip-to-chip, making both memory and compute available for AI — and other — workloads.

Image Credits: Celestial

Celestial also claims that its tech can reduce the amount of electricity necessary for data movement, indirectly boosting a chip’s performance. Typically, chips devote a portion of the electricity they draw to data movement between their circuits, which takes away from the electricity that the chip can direct to computing tasks. Celestial’s photonics reduce the power required for data movement, allowing a chip to — at least in theory — increase its compute power.

Celestial’s photonics tech, which is compatible with most industry interconnect standards (e.g. CXL, PCIe), delivers 25x higher bandwidth and 10x lower latency and power consumption than optical alternatives, the company asserts.

“With the growth in AI, especially large language models (LLMs) and recommendation engine workloads, there is a shift towards accelerated compute,” Lazovsky said. “The key problem going forward is memory capacity, memory bandwidth and data movement — i.e. chip-to-chip interconnectivity — which is what we are addressing with Celestial’s photonic fabric.”

Celestial is offering its interconnect product through a licensing program, and says that it’s engaged with several “tier-one” customers, including hyperscalers and processor and memory companies.

The interconnect product appears to be priority number one for Celestial, which also sells its own AI accelerator chip, dubbed Orion, built on the company’s photonics architecture. But as investors told TechCrunch in a recent piece for TC+, AI photonics chips have yet to overcome engineering challenges that would make them practical at scale. Unless Celestial has stumbled upon breakthroughs in the areas of data-to-analog conversion and signal regeneration — top stumbling blocks for today’s photonics chips — it’s unlikely that Orion is much further along than the competition.

Chip aside, Celestial has a number of competitors in a photonic integrated circuit market that could be worth $26.42 billion by 2027.

Ayar Labs, which makes chip solutions based on optical networking principles, has raised over $200 million in venture capital since its founding in 2015. Ranovus, another rival, recently landed a $73.9 million investment.

There could be consolidation ahead in the broader optical interconnection space, though. Around three years ago, Marvell bought Inphi, an optical networking specialist, for $10 billion. After a period of quiet, Microsoft last year acquired Lumenisity, a startup developing high-speed optical cables for data center and carrier networks.

Both Inphi and Lumenisity were targeting different use cases with their tech. But the enthusiasm from Big Tech around optics and photonics is worth noting.

“Our photonics technology is truly differentiated and is unique with superior characteristics,” Lazovsky said. “Given the growth in generative AI workloads due to LLMs and the pressures it puts on current data center architectures, demand is increasing rapidly for optical connectivity to support the transition from general computing data center infrastructure to accelerating computing.”

Samsung Catalyst, Smart Global Holdings, Porsche Automobil Holding SE, The Engine Fund, imec.xpand, M Ventures and Tyche Partners also participated in Celestial’s Series B.

Celestial AI raises $100M to transfer data using light-based interconnects by Kyle Wiggers originally published on TechCrunch



from TechCrunch

Crypto losses halved in Q2 2023 to $204M

As if the pessimism around crypto wasn’t enough, the industry has historically been hounded by hackers and scammers looking to make a quick buck. To make things worse, it appears tracing and recovering lost funds is now getting harder than ever as attackers use increasingly sophisticated methods.

According to a new report, only $4.9 million was recovered of the $204.3 million the industry lost to hacks, scams and rug pulls in Q2 2023 — significantly less than the $6.9 million recovered in Q2 2022. The good news, however, is that losses in the second quarter were 55% lower than in Q1 2023, when the industry lost a whopping $462.3 million to hacks and scams, with the Euler Finance flash loan attack accounting for 42.4% of the first quarter’s losses, REKT’s database showed.

The report, by web3 “super app” and antivirus solution De.Fi with supporting data from the REKT database, detailed that so far this year, the industry had recovered about $183 million, or nearly 28% of the $666.5 million lost to scams and hacks.
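The report’s headline percentages check out against its own dollar figures. As a quick back-of-envelope verification (the figures below are taken from the De.Fi/REKT report; the script is just arithmetic, not part of their methodology):

```python
# Figures reported by De.Fi/REKT for H1 2023, in millions of USD
q1_losses = 462.3     # Q1 2023 losses to hacks and scams
q2_losses = 204.3     # Q2 2023 losses
h1_recovered = 183.0  # recovered so far in 2023
h1_losses = 666.5     # total H1 2023 losses

# Quarter-over-quarter decline in losses
decline_pct = (q1_losses - q2_losses) / q1_losses * 100
print(f"Q2 losses down {decline_pct:.1f}% from Q1")   # ~55.8%, i.e. roughly halved

# Share of H1 losses recovered to date
recovery_pct = h1_recovered / h1_losses * 100
print(f"{recovery_pct:.1f}% of H1 losses recovered")  # ~27.5%, "nearly 28%"
```

The small gaps between these computed values and the report’s round numbers (55%, 28%) are consistent with ordinary rounding.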

A chart showing crypto funds lost and recovered in the first half of 2023 from De.Fi and REKT report

Image Credits: De.Fi, REKT

Q2 saw over 100 exploits

This quarter had 110 recorded cases of “scams, exploits or unintended losses,” the report stated. The three biggest cases were the Atomic Wallet breach at $35 million, Fintoch at $31.6 million for its alleged Ponzi scheme, and an exploit of a vulnerability in MEV-Boost’s software that led to $26.1 million in losses. These three accounted for a combined $92.8 million, almost half of the total losses in the quarter.

Crypto losses halved in Q2 2023 to $204M by Jacquelyn Melinek originally published on TechCrunch



from TechCrunch