Walmart OKs ChatGPT for workers

This undated file photo shows Walmart's sign in front of its Bentonville headquarters.

Walmart Inc. now lets its workers use a popular artificial intelligence program called ChatGPT on their work computers, as long as they don't share any corporate or customer information on it.

Walmart Global Tech had previously blocked employees from using ChatGPT out of concern that they were sharing information that needed to stay private. But an employee memo leaked to some media organizations said the block had been lifted and new usage guidelines put in place.

A Walmart spokeswoman confirmed that the company reversed that policy in February, allowing workers to use the program, or "bot," from their work-issued devices.

"Most new technologies present new benefits as well as new risks," she said. "It's not uncommon for us to assess new tech and supply our associates with usage guidelines."

The spokeswoman said Walmart doesn't share specific information about these guidelines or how the company secures its network.

She did not address whether any restrictions applied to how workers use the bots on their personal computers and devices.

According to data security firm Cyberhaven, 4.9% of workers have pasted their companies' data, such as computer code, into ChatGPT since it debuted in November.

The first iteration of the technology was called ChatGPT 3.5. A newer version, called ChatGPT 4, was introduced on March 14.

Most Wall Street banks have reportedly restricted employees' use of ChatGPT. These include JPMorgan Chase, Goldman Sachs, Bank of America, Citigroup Inc., Deutsche Bank and Wells Fargo.

Amazon, Verizon and Accenture also have asked workers not to use the bot.

ChatGPT is a "generative artificial intelligence" bot created by the artificial intelligence research lab OpenAI, which Elon Musk helped found in 2015. It's one of several such bots that create different types of content, which users can apply for either constructive or malicious purposes.

Big companies are working on their own language-based generative AI bots, including Microsoft -- in collaboration with OpenAI -- as well as Google, Meta and many startups. Other bots are image generators, such as DALL-E, Stable Diffusion and Midjourney; still others create videos and even music.

Walmart's usage guidelines apply to all generative AI bots, the company's spokeswoman said.

Most of these bots gather their data by scouring the internet and online databases. They pick up both correct and incorrect information but can't distinguish between the two, so their content often contains errors. Most contain filters to block false information, but those aren't perfect.

As an example of how ChatGPT works, the question "What are generative artificial intelligence bots?" typed into the chat box returned three paragraphs, quoted here in part:

"Generative artificial intelligence (AI) bots are AI programs that can generate new content, such as text, images, or even videos, that mimics human-like creativity. These bots are designed using machine learning algorithms that enable them to learn patterns and structures from existing data and generate new content based on this learning."

"Generative AI bots are used in various applications, including creating chatbots, generating realistic images, and even writing news articles. However, as these bots can produce content that is often indistinguishable from human-generated content, it raises concerns about the potential misuse of such technology."

At least one media website already makes at least limited use of these bots. CNET, which publishes content on technology and consumer electronics, runs an editor's note at the bottom of some of its articles saying, "CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors."

Alan Yang, an assistant professor of information systems at the University of Nevada, Reno, College of Business, said, "Companies that want to avoid having their data shared will want to keep information away from web sources.

"As the technology develops and draws more investment," he said, "privacy and ethics advocates will likely put pressure on OpenAI to continue divulging the scraping sources used in new iterations of the language model."

ChatGPT -- which stands for generative pre-trained transformer -- created such widespread excitement because it can do tasks that artificial intelligence had previously been unable to perform, Yang said.

"Prior to ChatGPT, generative AI made a splash when it was used for the creation of AI art," Yang said. "ChatGPT has had wider reception because of its broader uses beyond art generation."

As to how ChatGPT can be used constructively in a business setting, Yang said he recommends using it as a support tool for fairly simple tasks, such as editing a document, checking code or preparing an outline.

"Attempting to perform more complex tasks such as data analysis or reporting on current events using ChatGPT 3.5 will have mixed results," he said.

That's because ChatGPT's knowledge comes from information "scraped" from the web between 2020 and 2021, Yang said.

And he advises workers and students against "copying ChatGPT-generated text in its entirety and submitting a report or email that will be read by a human supervisor."

"Even individuals that are receptive to the use of ChatGPT as a support tool will likely get annoyed if they receive an AI-generated response when they ask a human worker to complete a task," Yang said.
