Chapter 15: Business Technology Advancements
What We’ll Cover >>>
- The Future in Business Technology
- Home Automation
- Workplace Automation
- Artificial Intelligence
- Hardware
- Software
- Access to Technology
The Future in Business Technology
This textbook has covered a lot about computing, productivity, and the Internet. It might be hard to see what all of that has to do with business technology specifically.
As covered at the beginning of this book, business technology refers to the technologies and systems that help employees accomplish work activities. It relates to use in school and the workplace, and includes computers and their peripherals, software and utility programs, and systems/use of the Internet. Business technology covers all the uses that support finding and getting a job, getting the work done, and providing results for stakeholders like customers, vendors, shippers, colleagues, and students.
Everything this book has covered has been supporting information and reference for the issues and challenges that surround business technology. Often the technology discussed wasn’t specific to the workplace (sometimes school, sometimes behavior, sometimes general technology). However, in one way or another the business/work market is impacted, because we as students, prospective employees, and actual workers are impacted. Essentially, there are aspects of technology and the digital world that can’t really be separated from the core definition of “business technology”. It is all inter-related.
This chapter will discuss current and possibly pending advancements in technology, which in one way or another are inter-related with the business world.
Home Automation
Home automation impacts the business world because it is a market of products and services, because information privacy and security are part of the equation, and because more people now work in home offices that are affected by it. Employees in this business market are responsible for the development, assembly, and management of the products, as well as the technical and customer-service support.

MedAttrib: technofaq.org CC BY-NC-SA 4.0. Image of smarthome.
Initially, most people’s home networks had just a few connected devices, like a computer and a printer. This has changed in recent years. An important need is to keep firmware updated, and to consider a second Wi-Fi network so that computers and work assets don’t mix with the smart home devices.
Smart Devices
Home automation allows many devices to connect to our home network. Typically, these “smart” devices work just like their regular counterparts, like light switches, and they also have programmable smart features. They allow for tracking usage of power/energy, and by people who have access. They can help automate some chores. Programming can happen on the device, by smartphone, and by voice.
- Smart Appliances. Get reminders when an appliance finishes a process, like the washing machine or oven. Set the coffee to brew before you wake up. A smart fridge can let you know food is spoiling, keep a shopping list and deliver it to your smartphone, and narrate recipes. Smart ovens can preheat and turn off according to the recipe instructions.
- Smart Lighting. Smart wall light switches work like a normal light switch, but can be turned on or off using your computing device. They can be programmed to activate the lights at certain times. Smart light bulbs are similar, but you leave the old light switch always turned on, and turn the smart bulb on and off through the app or a secondary switch.
- Smart Locks. A keypad can be programmed with a specific code, including ones for different people to track use. The door can also be locked and unlocked from a smartphone app.
- Smart Smoke Alarms. A low battery can prompt a reminder message before the alarm goes off.
- Smart Speakers (with microphones). Don’t want to use your phone or your watch to control your home automation? Give a voice command instead. “Answer the phone” when it rings, or “Set the timer for 3 minutes”, are just two of many possibilities.
- Smart Thermostats. This can be programmed for temperature ranges at different times.
- Smart TVs. Video streaming services such as Netflix provide on demand access to movies, documentaries, etc. A smart TV will have apps to access common streaming services, and also allow you to connect your phone or computer to the TV so you can view your photos or computer screen on the larger TV screen.
- Video Doorbells. The doorbell can notify your smartphone and display the visitor. You can talk with them, and the tool can also serve as a security system that takes a picture of whoever comes in range of the doorbell’s camera.
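The programmable features described above generally boil down to stored schedules and triggers. As a rough illustration only (the `SmartLight` and `Schedule` classes here are invented for this sketch, not any vendor’s actual API), a time-based rule might look like:

```python
from datetime import time

# Hypothetical sketch of a rule-based smart-home scheduler.
# Class and device names are invented for illustration.
class SmartLight:
    def __init__(self, name):
        self.name = name
        self.is_on = False

    def set_state(self, on):
        self.is_on = on

class Schedule:
    """Turns a device on or off based on a programmed time window."""
    def __init__(self, device, on_at, off_at):
        self.device = device
        self.on_at = on_at      # e.g. time(18, 0) -> 6:00 PM
        self.off_at = off_at    # e.g. time(23, 0) -> 11:00 PM

    def apply(self, now):
        # Simple same-day window check (does not handle overnight spans).
        self.device.set_state(self.on_at <= now < self.off_at)

porch = SmartLight("porch")
rule = Schedule(porch, on_at=time(18, 0), off_at=time(23, 0))
rule.apply(time(19, 30))
print(porch.is_on)  # True: 7:30 PM falls inside the 6-11 PM window
```

A real smart-home platform would also handle overnight windows, time zones, and device communication; the point is simply that “programming” a smart device means storing and evaluating rules like this one.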
Workplace Automation
Workplace automation uses technology systems to do repeatable / predictable workflows without requiring manual intervention. The technology systems are a blend of artificial intelligence (AI), machine learning algorithms, and robotics. Programmed effectively and correctly, automation can improve efficiency, reduce repetitive motion injuries, minimize human error, and support customer interactions that require standard information.
We already experience many of these uses when visiting websites and needing customer support. The key to making this workable is to put the customers first. The digital interfaces have to be well programmed to be truly responsive and able to resolve most problems, so that a customer can get rapid assistance and ask for a live person only when needed.
Uses of workplace automation
- Asset tracking: Digital tags can be applied to warehoused products, to automate tracing, reporting, and ordering.
- Automated Data Collection: This can be configured to generate data entry input forms for a customer to use online, and/or for a phone customer’s call responses to be translated into a form and added to a database. Once in the database, the data can be sorted, analyzed, scheduled, and acted upon, with tracking and documentation.
- Customer Relationship Management: Some or most of customer management – depending on the business – can benefit from having automated problem-solving options. Customers can experience a CRM digital voice on the phone or a digital assistant on a website chat. They can receive bot-scheduled return calls to avoid waiting on hold. The live person can pull up database information to verify the customer’s identity and follow up on the problem resolution.
- Digital tracking: Using digital tags and electronic shelf labels can allow for the automatic syncing and updating of pricing, and for updating in-store inventory stocking needs.
- Human resources: Recruiting can be streamlined by attaching a database-connected application form to job listings. Applicant data goes into the database and is there for tracking through the process: initial phone calls, comments, interviews, and the hiring process. Trainings can be assigned for onboarding, and some trainings can be automated digital scenario-based programs. Scoring, plus employee accomplishments and annual reviews, go into the database.
- Industrial automation: Heavy, body-stressing, small-space, and detail-oriented work on an assembly floor can be assisted by robotic counterparts. An example would be Boeing (because the author temped at the Everett plant and daily watched the assembly line). Boeing now uses an automated drilling system for plane exteriors, and a robotic painter for the 777 wings. Other examples in industry could be machining, welding, and other metal fabrication work.
- In-person support: An office that has rare customer visits and does a lot of online work might not need a fully staffed front desk reception all day long, which can allow job sharing or limiting in-person reception to a few hours a day.
- Managing financials: A lot of a company’s financial work can be automated. Online forms can let employees input expenses that need to be reimbursed, timesheet info, vacation hours, etc. Payroll just needs access to timesheet data to auto-process salaries and deductions and then make deposits.
- Marketing automation: Much of the acquisition of customer data is about being able to personalize marketing plans and advertising techniques. Bulk ads and print ad campaigns aren’t cost-effective and tend to be one-size fits all. For lower cost, through automation and digital delivery, prospective and repeat customers can be targeted with personalized ads, deals, discounts, and preferences. This is already common on Amazon and other sites.
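As a small illustration of the payroll idea in the list above, consider a sketch that turns timesheet records into deposit amounts. The record format, names, pay rates, and the flat 20% deduction are invented for this example; real payroll systems involve tax tables, benefits rules, and compliance checks.

```python
# Hypothetical sketch of payroll automation driven by timesheet data.
# All records and the flat deduction rate are invented examples.
timesheets = [
    {"employee": "Avery", "hours": 80, "rate": 25.00},
    {"employee": "Jordan", "hours": 75, "rate": 31.50},
]

DEDUCTION_RATE = 0.20  # assumed flat 20% for taxes/benefits

def process_payroll(records):
    """Compute gross pay, deductions, and net deposit for each record."""
    results = []
    for rec in records:
        gross = rec["hours"] * rec["rate"]
        deductions = round(gross * DEDUCTION_RATE, 2)
        results.append({
            "employee": rec["employee"],
            "gross": round(gross, 2),
            "deductions": deductions,
            "net_deposit": round(gross - deductions, 2),
        })
    return results

for line in process_payroll(timesheets):
    print(line)
```

The value of automation here is that once timesheet data is in a database, this computation runs without manual re-entry, which is exactly where human error tends to creep in.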

MedAttrib: technofaq.org CC BY-NC-SA 4.0. Relationships management.
Artificial Intelligence

MedAttrib: L.J. Bothell image of “Artificial Intelligence in College” generated from DeepAI generator. (yes, I planned on the irony here, just to demonstrate. :0) )
On September 26, 2023, your author (Mesozoic age), who uses MS Windows 11 on a laptop, received another Microsoft update. This update included a rollout of MS Copilot (Preview), which one would like to think is a new version of what MS Cortana and the old MS Clippy paperclip bot used to be. However, it has proved to be much more, and is an “AI companion” that is fully integrated into Microsoft products/apps.
In the time since this textbook was written (Spring 2023), the phrase and current reality of AI has exploded into common social and professional consciousness. While this book isn’t able to be repurposed to focus on all things AI, part of our responsibility / education / jobs regarding business technology expertise is moving toward a need for greater understanding of it. Therefore, this section has been expanded a bit – and yes, still written entirely by the author – with the author having been self-educating about AI through reading, seminars, workshops, trainings, and even trying it out a bit.
The Basics of AI
We have all been inundated with articles, books, discussions, and social media about AI, plus had it handed to us as a routine addition to our daily lives – like with Google AI summaries, more AI chats and phone responses, and many news stories. Right now Microsoft products, which I use and teach about, have added “Copilot” as the AI link/tool to their platform in order to help users and businesses harness Microsoft integrated AI in daily tasks and processes.
Let’s try to get some kind of working idea about what AI currently is (and is currently moving toward). The acronym AI stands for artificial intelligence, but what is meant by that has different levels of human comprehension, expectation, and anticipated practice. A generalized description of artificial intelligence is that it is a field of science focused on building computers/machines that reason, learn, and function in ways that normally require human intelligence, or which involve data at a scale that far surpasses what humans can analyze. Consider the following:
- AI as Assistive Information: This has been the use of computers to do work – within a given timeframe and set of resources – to automate and even replace some of the ‘grunt-work’ of humans. This now involves computers/machines being trained with vast amounts of information so they can access and consolidate, in very short times, more data and options across various systems than a team of humans can. Assistive information moves beyond what human research, information gathering, and analytics have usually allowed humans to do in life and business. An example would be partnering with AI in medical research to much more quickly review, categorize, and pull together diverse and huge amounts of data to support solving a specific medical need, like the creation of a new vaccine in response to a specific illness.
- AI as Adaptive Intelligence: This includes computers generating new information and solutions that can repurpose, boost, and amplify past successes, failures, and limits. Computers are programmed with sophisticated algorithmic instructions to create the ability for them to learn from their own ‘experience’. Computers sort through vast amounts of data, search for and receive new information, and determine failures to not reproduce/options to no longer pursue – then start there for the next tasks. They generate whole new sets of data, consider multiple scenarios, and prompt options unbounded by human constraints to develop new approaches to common problems. An example would be transforming the actual medical researching process itself, challenging and generating alternatives to only a vaccine in order to account for genetic/environmental problems and policies, and creating multiple problem-solving solutions to the illness more effectively than humans currently can.
- AI as Active Integration: This generates new management of full life and work systems at both micro and macro levels, to manage entire industrial, social, and human environment structures. This is the ability for computers to use the learned data, self-experience, generated new datasets and options, and continued prompt/programming input from humans to manage many integrated life experiences and needs of their ‘customers’. Think machines that run a whole household, or whole individualized product marketing campaigns, without humans needing to ask for anything – based on learning and predicting humans’ needs, schedules, assets, and psychology. Consider the medical environment situation above; active integration could lead to the possible replacement of significant parts of, or entire, systems like the health/insurance process. How much could actively integrated tools manage the whole human hiring, payroll, and work environment, complicated medical procedures, and the medical-related decision-making of humans?
- AI as Artificial Intelligence. And, of course, we might ask what is the potential for possible autonomous and self-willed sentience of computers – like Data in Star Trek TNG, HAL in 2001: A Space Odyssey, or the machines in The Matrix? Could this actually lead to the creation of a new species of intelligence and competition alongside humans?
The Environment of AI in the Workplace/Consumer Needs
AI/staff teamwork can help streamline workflows and build more efficient new processes; the result is work that’s not only faster but also of higher quality and better aligned with problem-solving and supporting business goals. How AI can/will be helpful:
- AI can access and leverage company-wide information and resources. It can analyze individual documents, projects, departments, and workflows. It can connect the dots across teams, projects, and relationships within the business, and could even connect with known info from relationships and resources from outside the company, like vendors. It can create and analyze data graphs to capture and visualize all the interconnected data about team workflows to help resolve issues, prevent new ones, and build better processes that take less time and cost. Consider how annual reviews, accreditation processes, budget and taxes, and choosing the most helpful vendor relationships could be impacted.
- AI can integrate with daily tools and expand/help develop new templates and process workflows. Working seamlessly within the tools that teams already use, generative AI can add more options in less time at less cost, and help workers add more creativity. For instance, a team using something like MS Copilot to access previous years of annual report work can improve all clerical/administrative and analysis/presentation of the current one, just by using existing Microsoft software tools that are already in place.
- AI can help create and refine custom workflows, with features that are already being designed and deployed by vendors and AI providers. Even more useful can be developing personalized datasets that can be tailored to an organization’s needs, data, goals, and projections. AI that adapts to a company’s unique workflows can greatly improve productivity at lower cost, better work with the employees and leadership values, and ensure alignment with business goals. Consider how a nonprofit could develop better low-cost methods and more efficient ways to fundraise, and then choose better resources and create more efficient use of the funds to scaffold services and amplify effects of the non-profit projects.
- AI can provide current, real-time data and insights that empower teams to make informed decisions faster. This also can enhance team options for seamlessly communicating and collaborating, regardless of time zone, location, and department. Consider how a community college could better serve students with real-time data access to current employment data, state-wide enrollment and cost information, and the ability to employ teamwork between departments and with faculty/staff both at school and home workplaces.
- AI should have scalability as an organization grows and works more with it. Business/staff/resource demands grow and evolve, and AI workflows/solutions should be able to shift and scale to project and meet these demands to deliver reliable and long-term value.
The Environment of AI in Education
Our education system – in the USA and worldwide – is facing unprecedented changes due to fiscal/political factors as well as the tremendous number of constantly evolving workplace needs. During this early timeframe of the public’s access to and experimentation with AI tools and processes, education itself is having to adapt quickly – which is very difficult and challenging for core pedagogy and instruction. In other words, what we have to be able to learn, understand, do, build upon, and carry into our work and public lives – and the methods we need to practice to do so – is changing so rapidly with AI access that instructors, students, and prospective employers want and need things that just can’t always adjust as fast as AI seems to be forcing.
However, at the same time, AI IS being demanded out there – in our workplaces, our services, and even by us as we expect perfection fast, cheap, and 24/7. We do have access to AI to make better processes and products, and to help better our teamwork, creativity, engagement, and hopefully our abundance in life. How do we resolve this – and students’ need to become prepared for the evolving workplace and society needs that are being prompted by AI integration? This is where we – as educators, students, employers, service providers, and digital citizens – can and will make some great decisions that can impact us all for the better. How can we use and benefit from the developing AI tools to learn better, more efficiently, and tailored to our own strengths, collaborate to develop new knowledge, and hit the ground running in our evolving business technology world? And, how do we do this in a way that still builds our basic human competencies, scaffolds our learning and information intake, and helps us ‘show our work’ so we ourselves remain our best educators and mentors?
Schools are struggling with this, because even the seemingly simplest changes in curriculum, services, knowledge flow, and student preparation have slow-moving costs, regulations, and protections to help make sure the school is doing its best job with minimal disruption to student access/learning/quality competencies. 2001’s USA “No Child Left Behind” (NCLB) created an education culture of teachers needing to help students beat a nationwide standardized testing process that threatened teachers’ jobs if too many students failed a one-size-fits-all testing mandate. This helped create a generation of ‘teaching to the test’, graduating classes of students who lacked a lot of basic life-skill and critical-thinking proficiencies, and an environment of inequity. This was replaced in 2015 with the ‘Every Student Succeeds Act’ (ESSA), which worked to restore equity and support education for students. However, after 14 years of NCLB, a lot of students lacked proficiencies, instituting ESSA took time, and COVID hit the USA education system with a lot of problems starting in 2020. Now, from mid-2023 on, AI’s impact on education is adding a lot of new complexity while potentially alienating the students/graduates from 2001 through COVID. Oh, my. . .
So, what do we do? What do students need in our education system of kindergarten-12th grade, community/vocational schools, and university-level colleges? What do educators need to help them teach and support their students? What do schools need in order to provide the tools, environment, and some kind of consistent standard of education for every student to be able to show life-skill proficiencies when they graduate and decide to pursue further education opportunities? What is the workplace demanding, and what does our economy of prices, resource depletion, and products/services expansion require? How can AI help us all “get gud” (a gaming term for becoming excellent) while relying upon and supporting human intelligence and accomplishment?
Benefits in Education
How do we develop, use, and require AI to be a tool for us to become better-focused, supported, more knowledgeable, and rewarded humans? This will and has to start with adapting our education, which is a huge undertaking and a subject for other books. However, we can consider a few things that students do need:
- Chances to become better prepared for what the workplace is demanding now, which includes being able to comprehend, use, and boost their work efficiency with AI tools.
- Exposure to more effective AI-result-recognition skills, so as to discern between authentic & accurate work versus AI-generated work that may be unreliable, misleading, and exploitive.
- Preparation for designing and refining AI-prompts, which will allow doing more in shorter times and with much more usable results. This skillset would include common sense, logic, ethics, information literacy, critical thinking, comprehension of cause and effect, and a forward-thinking perspective.
- Practice in using AI as an inquiry support that can enhance and refine preparation work for critical-thinking writing, processes, and brainstorming. AI that simply generates potential results without active and savvy human interaction could otherwise take the human influence out of serving human needs.
- Experience integrating AI thoughtfully as both a research tool and editing assistant that can help put the engaging part of product/service/presentation creation more firmly in human hands.
- Access to AI-integrated learning, skills-building, communication, information literacy resources, and massive data with built-in categorization/analysis, which can support both solo and team-oriented project efficiency and success.
- Personalized learning paths for individual needs, motivations, learning styles, accessibility, and timelines. Plus, there is 24/7 access to brainstorming, virtual tutoring, different types of immersive learning modalities, referenced primary information, reviews of peer work, and even instant support that can help students break a learning logjam.
- Support for free/lower-cost academic resources that can reduce financial barriers. By offering no- or low-cost research assistance and editing, AI – if used ethically and effectively – can help students access services that previously required payment.
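One of the skills in the list above – designing and refining AI prompts – often starts with structuring a request into labeled parts such as role, task, constraints, and output format. A minimal sketch, with the helper function and field names invented purely for illustration:

```python
# Hypothetical sketch: structuring a prompt with role, task, constraints,
# and output format -- a common prompt-design pattern. Names are invented.
def build_prompt(role, task, constraints, output_format):
    """Assemble a structured prompt string from labeled parts."""
    lines = [
        f"Role: {role}",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Output format: {output_format}")
    return "\n".join(lines)

prompt = build_prompt(
    role="research assistant for a community college",
    task="Summarize three peer-reviewed views on workplace automation.",
    constraints=["Cite each source", "Flag any uncertainty"],
    output_format="numbered list with one paragraph per view",
)
print(prompt)
```

The habit being practiced here is the thinking, not the code: stating the role, the task, the limits, and the expected output explicitly tends to produce more accurate and more verifiable AI responses than a one-line request.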
And, for educators, AI – with the ability to assemble and analyze data – has the potential to better assess student starting points, experience, learning, progress, and areas to refocus content. It can also help educators learn early in a term about the potential issues that are causing students to fall behind and lose interest. AI can help instructors develop personalized and team-oriented interactive learning environments and simulations so students can scaffold their learning with data, analysis, realistic practice scenarios, etc. Finally, AI can help educators automate many routine and time-eating activities like scheduling, admin workload, and reporting.
Challenges for Education
A common concern about generative AI – particularly in the doing of one’s work – is the loss of human fidelity, knowledge, and competence in favor of convenience and even laziness. This is a big concern – especially with younger students who are still learning discipline and follow-through – so this segment is kind of exploratory.
An example would be the ethics of humans learning and practicing skills and knowledge so that they can actually work with them. Based on past examples: basic and scientific calculators added support to data calculation, and internet avenues added much more rapid and wide access to information and communications. Moving to more paperless workflows let students and staff more quickly and easily access more learning and research materials. Search engines helped identify and narrow down large amounts of scattered information, while online connections helped make communications more diverse and accessible for learners and educators. What and how instructors taught and students learned adapted and even changed to meet the equally shifting needs of business and commerce. Yet, in all this, the specific underpinnings of core learning and scaffolding knowledge and skills remained fairly consistent – although in recent years, with search engines and so much non-academic information seeming ‘specialized and accurate’, there does seem to have been some decline in students’ reading, digesting, and follow-through of academic and skills-building content. All of this felt challenging and even somewhat revolutionary as it was happening.
What is happening now with AI and the human bandwagon IS actually revolutionary, and bears a lot of thinking and collaborating about. Within weeks/months of AI services’ deployment to the public in 2023, tech companies and businesses overall started projecting a whole new phase of employment demands and employee-package needs – needed ASAP. Most of these are not fully tested, accurate, or reliable, but it IS all the rage. Educators in deep-learning areas like medicine, law, engineering, economics, environments, and relationships (which in various ways affect most industries and workplaces) are facing demands for faster teaching and more AI-skills-enhanced graduates, even though these tools are still problematic and inaccurate, and many of the ethical challenges remain unresolved. Students, grads, and workers are supposed to now be instantly and completely prepared to step into an untested and untried commercial system of “AI must make everything cheaper, faster, better, and less human-troublesome”.
The ethics of becoming educated and prepared for life is facing far more challenges than ever before. Students and grads can feel they need to skip steps, sidestep intellectual property, cheat, and then expect that AI and public knowledge will pick up the slack. Consider students in high school and college. Many determine that at least some of the courses required for a diploma, certification, or degree are unneeded, useless, and/or a waste of their time (i.e., boring). They may already have cheated, had other students sit in on their tests for them or write their papers, and pled for a passing grade when not actually showing basic coursework competence. Now with AI tools, many students are spending as much time trying to use AI tools to game the system as they would just learning the subject. But, who cares, since they get their grade and the teachers “get paid”, right? Here’s the thing:
- Students who game the system with AI don’t know the subjects, skills, or experiences that they certify in, like the medical use of pharmaceuticals on patients for a nursing degree, or the engineering skills for working on big buildings or systems.
- Students who claim AI’s responses as their own expertise fail in the jobs they get, putting other employees, the employers, and the customers at risk just because of what they did not actually learn and comprehend. Everyone else has to cover for them and fix mistakes.
- Students who graduate unable to actually use what they were supposed to learn are employees who cost the work market in:
- Slow and inefficient production with related higher ‘make-up’ costs;
- Low-quality and flimsy products and services;
- Defects and mistakes that cause failures, damage, injuries, and deaths; and
- Incompetence that contributes to big costs for the employer: expensive lawsuits, lost big contracts, fleeing customers, loss of revenue, and forfeited reputation, etc.
In other words, students who use AI to pretend they know something they are being trained in, and to avoid actually learning it, are actually unfit and potentially too dangerous to hold and do many jobs. This has been a growing problem already – with search engines, the lapse of reading, and loads of questionable material out there – and now with AI, the practice of learning-avoidance is even easier to do and harder to detect. Think of things like buildings that collapse because of deficient engineering, medical crises from incompetent prescribing of medicines, property damage from inept inspections, poor quality products from lazy ‘skilled’ work, and unfit services that end up costing people’s health, money, and safety. All those characters we hate in movies, shows, and books who seem to be incompetent excuse-makers – those are your AI ‘cheaters’.
The Evolving Future of AI
AI will disrupt existing processes and landmarks in life, which means that innovative inventions and evolved current technology will change the way society behaves, thinks, and interacts. It will also change the way businesses and industries operate. Will these innovations be positive and focused on moving people, society, and economies in positive and equitable ways? Or, will they be ill-planned and focused on short-changing salaries, resources, options, economies, and human opportunities for growth and autonomy? AI is here, and we need to study it, work with it, and constantly evaluate and improve its benefits while minimizing and eliminating its problems. And, we are seeing the potential for both loss of significant parts of work process-related jobs, as well as the potential for new and more skilled jobs.
For instance, there will have to be a variety of AI-centric jobs. These would include the continued programming, evaluation, refinement, and collection of AI data. This means more:
- Server farms and related infrastructure/energy resources;
- Data storage, organization, and retrieval methods;
- Integration between AI and business environments;
- Precise programming and tailoring of AI tools;
- People who can ‘engineer’ sophisticated prompts that receive accurate and actionable responses;
- Human behavior, experiences, and resulting consequences related to AI;
- Ethics and authenticity of AI practices – the machine learning sources, access to information, legality of data acquisition and use, regulation regarding AI influence and culpability, etc.
- Human resource-oriented work for hiring AI-prepared talent, training in AI literacy, and human-AI collaboration specialists.
- Levels and variations of security, both against attacks and for strengthening accuracy/reliability of AI-generated information, processes, and activities.
In short, we can already see and project several benefits and challenges that will come with integrating AI so much in education and the workplace, regarding products and services, and within human interaction.
Benefits of AI
Benefits we can experience and integrate into our lives will require us to continually develop, monitor and adjust all AI-related processes, and to have highly competent and ethical people in every stage. Consider:
- Faster processing of data;
- Brainstorming new approaches to learn, study and explore;
- The ability to speed, deepen, and integrate research;
- Automation of many more routine tasks;
- Management of dangerous tasks;
- Added precision to data results and analysis; and
- Developing additional ways to learn, reskill, and achieve.
Challenges with AI
Challenges we will face and may see integrated into our lives will come from human laziness, people and businesses taking shortcuts, lack of ethics in AI use, and delays/confusion in human-protecting legislation related to AI-related practices and affected products. Consider:
- Absence of core skills in the workplace if AI-enabled cheating allows students to get degreed without those skills;
- Dumbing down of knowledge, culture, and authenticity due to cheaper, one-size-fits-all development of products, services, and art;
- Problems with accuracy and ability to course-correct due to reliance on speed and thrift;
- Absence of privacy when all data and details can be accessed and categorized;
- Loss of property and intellectual rights when even machine learning avoids full crediting and compensation for original work used for the machine learning;
- Bad actors who create exploits, barriers, crises, and loss of privacy/security at previously unprecedented levels;
- Job insecurity, which will stress those without adequate access to technology and continuous learning; and
- The overlooking/ignoring of human rights in favor of faster and cheaper processes.
Basically, all the bad, lazy, and messed-up practices we already have go on steroids – in ways we can’t track, hold anyone responsible for, or even notice and stop while they are happening. Write a bad algorithm and leave it unmonitored, and everyone pays in ways that devastate lives and finances.
More Challenges: Power
In addition, the use of AI requires a lot of electricity for all the computational power used. This includes the megawatts needed for the high-performance computing infrastructure used during AI training – which is ongoing, iterative, and 24/7, with thousands of graphics processing units (GPUs) running continuously for months. There is also the global growth of data centers, whose massive fossil-fuel electricity footprints compete with consumer and general business use on existing power grids. The processing needed to interpret and respond to every single query/prompt is also significant, with even the most basic prompts using several times the processing power of a basic search engine request. This is multiplied by the hundreds of millions of daily requests from searches, digital assistants, online shopping, and other consumer activities – and that is before you get to the refining of basic prompts and the kind of AI use that sorts through massive data for detailed research and business purposes, which demands the ongoing growth of more complex models and larger datasets. Energy use challenges include:
Electricity: The demand for processing power is growing so fast that the need for electricity – fossil-fuel-based and otherwise – is currently almost doubling each quarter of a year.
Higher water usage: The advanced cooling systems in AI data centers consume excessive amounts of water, and in areas with water scarcity this competes with human needs.
Emissions: AI energy use relies heavily on fossil fuel-based electricity, which contributes greatly to greenhouse gas emissions.
E-waste: The components used in all this processing burn out quickly and become obsolete fast, and discarding the damaged hardware leads to a lot of non-biodegradable waste.
Resource shortages: Manufacturing new components means a lot of extraction of rare earth minerals, which drains natural resources and contributes to environmental degradation.
The Practical Use of AI
How does AI actually work? Here are a few important AI processing concepts, which are all part of how it is used, grows, matures, and affects us.
- A human needs information, a task completed, an image, etc.
- Human creates in human language a (hopefully) literate and detailed prompt that is (hopefully) neutral/unbiased and structured to shape the needed result’s format and context.
- Prompts should be concise to limit the number of tokens (pieces of text/data) that the AI needs to interpret correctly in order to generate accurate results.
- Longer prompts would be for tasks that need more context and complex reasoning to solve problems.
- Shorter, specific prompts would be for basic search/generation, and will likely need more refinement attempts.
- Parameters and prompt details need to be unbiased and information-literate in order to minimize AI hallucinations (inaccurate fabricated results) that offer unreliable info.
- Natural Language Processing enables computers to understand, interpret, and generate human language, whether spoken or written in prompts.
- AI tool needs to be trained to successfully complete tasks.
- AI basic machine learning uses large language models (LLMs) to train the AI agent on massive amounts of data. The goal is task-oriented generative AI.
- Generative AI is a type of AI that can create new content – such as text, images, or videos – based on patterns learned from existing data.
- Deep learning occurs when multilayered neural networks simulate the complex decision-making power of the human brain. The goal is autonomous, comprehensive decision-making AGI.
- An Artificial General Intelligence (AGI) AI can understand, learn, and perform any intellectual task that a human can.
- The AI autonomously completes the prompted task.
- It uses/adjusts parameters to learn from the data, generalize to unseen data, and generate the results.
- May use diffusion models to create new data, such as for images.
- The AI algorithm should also use RAG (Retrieval-Augmented Generation) to retrieve current primary source information from expert systems/databases.
- A fine-tuned AI (agent) would be purposed for a specific range of use.
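The Retrieval-Augmented Generation (RAG) step above can be sketched in ordinary Python. This is a minimal illustration, not a real AI system: it ranks source documents by simple keyword overlap (real systems use vector embeddings) and folds the best matches into the prompt so the model answers from trusted, current information. All function names and the sample documents are illustrative.

```python
# Minimal sketch of the RAG idea: retrieve relevant passages first,
# then build a prompt that grounds the model in that retrieved context.

def retrieve(query, documents, top_n=2):
    """Rank documents by how many query words they share, highest first."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_n]

def build_rag_prompt(query, documents):
    """Combine the retrieved context with the user's question into one prompt."""
    context = "\n".join(retrieve(query, documents))
    return (f"Using only the context below, answer the question.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "The 5G rollout promises lower latency for IoT devices.",
    "Quantum computers use qubits instead of binary bits.",
    "GPUs accelerate AI training through parallel processing.",
]
prompt = build_rag_prompt("How do GPUs help AI training?", docs)
print(prompt)
```

Because the model is told to use only the supplied context, a well-built retrieval step reduces the hallucination risk the list above describes.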
Using AI
AI can generate ideas, examples, analogies, tactics & strategies, gamification options, potential outcomes and ways to change them, entry points, explanations, and how-to’s – and many will suggest NONhuman methods. However, for this to happen at the assistive information and adaptive intelligence levels, AI needs requests for action to be taken. This is currently called prompting, which is essentially the request we give an AI like ChatGPT.
Prompting has over the past few years been as simple as asking a search engine like Google a question: What is Stephen King’s latest book about? Pineapple on pizza or not? Best house-cleaning companies within 5 miles of zip code 98101. In these cases, we are now seeing search engines provide not only website links, but also an AI-generated response of a paragraph or two of info – and even an offer for an “AI deep dive”. The author, by the way, has found this AI summarization helpful in trying to get useful information as she learns from MMORPG/sandbox game wikis and players’ experiences in response to simple questions like “Where is the starting location of UmptyUmp mission?”
However, the power and use of AI is much more robust, and this means that for useful give-and-take – or partnering with the tool that AI can be – one needs to really think about the info they need and craft well-formed questions/prompts to get it. This is called Prompt Engineering – which, per Google’s 9/25/25 quick response to my query, is “the practice of carefully designing, structuring, and refining text-based instructions, or “prompts,” to guide generative AI models. . . toward producing accurate, relevant, and desired outputs.” (See what I did here?) There are now whole articles and even books being written about prompt engineering, and the following list offers examples of reasons for and types of basic prompts:
- Craft a keyword targeted version of my resume for automated keyword search processing.
- Assess and recommend paid writing opportunities in the (area of expertise).
- Search for articles on a targeted subject/specified aim.
- Gather reference information on a targeted subject/specified aim.
- Assess my collected timesheets for training details for annual review.
- Find end-of-life paperwork solutions and package them up for me.
- Based on (example/content info), give me # of (product/item/subject/problem-solving) ideas.
- Give me # variations on (topic). Tell me more about these ideas. How could each lead to (desired outcome)?
- Give me # of varied and accurate examples of (topic) that would make sense to (audience/consumer)?
- Give me # of original and creative uses for (object/product/process).
- Make sure that each article/reference actually exists by verifying that a web search returns a citation with a DOI. Include the entire citation in MLA format and the DOI in your final list. Eliminate any suggestions that do not comply with this.
- Offer examples of DEI in the workplace that do not focus on gender or racial parity and which do consider real-life concerns of people who oppose DEI. Look for the least harmful and also for the ones that really brainstorm workable ideas that take busy people into account. Choose the three best responses that support moral/ethical awareness without sounding condescending, reactive, preachy, condemning, or obnoxious.
- Write in the style of ____. Use his/her existing work as a model. Let me know if you need anything else from me before you begin.
- Before you begin, ask me what other information you might need to fulfill this task.
- Don’t do anything yet. First ask me if any part of what I am asking you to do is confusing.
- Please take a deep breath and work on this problem step-by-step.
Consider how the prompts go from very basic to adding in considerations that help the AI offer a more workable response/solution. Notice how the early prompts might need you to include something for the AI to work with, like a copy of your resume or timesheets. Then some prompts ask the AI to get and consider info and provide some conclusions (brainstorming) for you. And also, see how some prompts – which you might append to your initial prompt or ask later to ‘refine’ it – treat the AI like a collaborative partner you would be ‘talking this through’ with.
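The layering described above can also be done in code: a program assembles a role, a task, constraints, and a clarifying-question instruction into one structured prompt, just as several of the example prompts combine a basic request with added considerations. This is a hedged sketch – the field names, wording, and the sample recruiter scenario are illustrative, not a standard prompt format.

```python
# Sketch of prompt engineering as code: layer role, task, constraints,
# and a clarifying-question request into one structured prompt string.

def engineer_prompt(task, role=None, constraints=None, ask_clarifying=False):
    """Build a structured prompt from optional role, constraints, and follow-ups."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(task)
    for c in constraints or []:
        parts.append(f"Constraint: {c}")
    if ask_clarifying:
        parts.append("Before you begin, ask me what other information you need.")
    return "\n".join(parts)

p = engineer_prompt(
    "Craft a keyword-targeted version of my resume.",
    role="an experienced technical recruiter",
    constraints=["Keep it to one page.",
                 "Verify every claim against the attached resume."],
    ask_clarifying=True,
)
print(p)
```

Templates like this are how applications embed prompt engineering for end users who never see the raw prompt.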
Conclusion
Totally unknown at this point. You didn’t expect that, huh!
One big thing to keep in mind: human concerns should be reviewed, managed, and handled by humans, or else we are abdicating our own responsibilities to ourselves, each other, and all of society.
We have to be able to verify and double-check our work for validity and sustainability, and to do that we need to know and understand the basics, show our work, and stand by it. If we give it over to machines, we are walking away from our own competence and capability to think for ourselves and earn our own freedom. We become slaves to the machine/algorithm/whoever owns the machine.
At the same time, AI – like the printing press, mass communication, calculators, computers, the Internet, and other disruptive technologies – offers opportunities for enhancing and growing solutions to existing problems and new ways of preventing future ones. What ways can we advance – safely, ethically, and with greater speed/reliability – important needs like medicine, health, our environment and resource solutions, equity, justice, independence? What ways can we leverage AI to help us brainstorm options and solutions for our biggest challenges in environment/climate, population health, depletion of resources, inequities in our economies and representation, education shortcomings, etc.? How do we, as digital citizens in our tech-infused world, make our needs, voices, skills, and solutions heard and possible?
Hardware
How is computing hardware progressing? The discussion in Chapter 2 covered the basics of how things work; what’s coming up?
We’ve been accustomed to Moore’s Law: roughly every 2 years, the processing power we can pack into a square inch doubles. That is how we moved from room-sized supercomputers to small and slim smartphones that can do almost anything.
Now, the research and development process is moving away from silicon-based transistors and toward materials like graphene. Deep-learning software is enhancing programming and responsiveness. Noises are being made about quantum computing: moving from standard bits to quantum bits (qubits), which can represent more than the standard binary 0 and 1. My brain can’t handle it, but the results could allow a quantum computer to find solutions in fewer steps than a standard computer, which would speed up processing. The quantum solution wouldn’t necessarily replace existing computer technology, but supplement it with an additional specialized chip.
A survey of Google results for various “future computing” keyword phrases suggests:
- Battery improvements: R&D is being done to create longer-lasting batteries that don’t need as much recharging, and using materials with lower environmental impact.
- Chip advancements: To develop more efficient and cost-effective chips, the industry is working toward imprinting learning algorithms on chip architecture as another example of AI.
- Computing input/Holographic tech: Touchscreen capacity is being researched to develop predictive touch, which would rely on machine learning to predict user actions. Holographic virtual tech is working to create line-of-sight 3D images without the need for glasses or other interfaces. This could accelerate the possibility of holographic screens and allow for air-sweeping motion.
- Distributed computing: Multiple computers work by sharing their computing power. A network would behave as a single computer that provides large-scale resources to deal with complex problem solving.
- Energy costs: The energy demands of technology – both the development of computing devices and the management/operation of vast server data centers (power, cooling) – will have to be dealt with. Greening data centers to get to net zero emissions and 100% efficiency is underway; however, significant progress will need to be made in repurposing generated heat for the grid and in much more energy-efficient equipment.
- GPU dominance: Graphical Processing Units (GPU) co-work with the Central Processing Unit (CPU) on computers. The GPU’s purpose is to enhance computing with parallel processing to handle a lot of workloads at the same time. Big graphics, CAD, and gaming have been the beneficiaries; GPUs are also being tasked with training artificial intelligence (AI) and deep learning models. GPU performance will have to evolve rapidly, and in time they may supplant the CPU.
- Neuromorphic technology: Computer hardware and software elements are modeled after systems in the human brain and nervous system. This could allow devices to search for new info, learn, retain data, and make deductions.
- Scalability: A goal of computing is to increase the scalability of processing, harness the resources of multiple machines, and make cloud environments “agnostic” – independent of being designed for specific computing architecture and instead being able to serve any device. This in turn may cause a shift in hardware acquisition itself. Currently we have software-as-a-service (SaaS) with subscription software like MS Office. Computing hardware may undergo a similar path, with organizations moving away from requiring large expenditures on equipment and instead leasing hardware-as-a-service (HaaS).
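The parallel-processing idea behind the GPU and distributed computing items above can be sketched in ordinary Python: split one large job into independent chunks, run the chunks concurrently, and combine the partial results. This is only an illustration of the pattern – the thread pool below won’t actually speed up pure Python arithmetic; real gains come from GPU cores or multiple machines – and all function names are made up for the example.

```python
# Sketch of the split-work-recombine pattern behind GPUs and
# distributed computing, using a thread pool on one machine.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    """The independent unit of work each worker handles."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    """Split data into chunks, process them concurrently, combine the results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

result = parallel_sum_of_squares(list(range(1000)))
print(result)
```

The key property is that each chunk needs nothing from the others, so the work scales across however many workers (cores, GPUs, or networked machines) are available.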
Software
How is computing software progressing? A survey of Google results for various “future computing software” keyword phrases suggests:
- Application programming interfaces (API): An API is a software interface that helps developers link cloud computing services which make data/computing multifunctional for numerous programs. This allows data from isolated locations to be pulled and harnessed.
- Augmented reality: This will continue to be developed to reach personalized immersive experiences with tethered and standalone devices. Immersive experiences will need a huge push for 3D assets, and generative AI will be able to accelerate this.
- Blockchain-oriented software: Through a blockchain database process, data in systems is replicated, closed in a set called a block, and decentralized. Blocks are lined in a chain (blockchain) which, with transaction recording and public-key cryptography, ensures data security. Data can be viewed but not modified or hacked.
- Continuously deployed software updates: Code changes to an application are automatically tested and released into the production environment. Version control, code review, and configuration management help reduce the need for manual quality-assurance testing and human approval.
- Cybersecurity applications: With expectations of growth in cybercrime, like ransomware, AI /machine learning is being applied to security automation software. Software will have to become multifunctional to accommodate cloud security, IoT devices’ security, and the unique challenges of blockchain coding.
- E-commerce platforms: A content management system (CMS) and commerce engine that webstores use to manage products, purchases and customer relationships. Future improvements will need to blend user-friendly web presences with support for IoT, voice, and augmented reality aspects of e-commerce stores.
- Low code development: Low-code allows the creation of coded applications without needing developer skills, because a visual UI allows dragging and dropping pre-made code blocks. IT support will still need to supervise and test the results; concerns include security and the apps’ ability to integrate with other business functions.
- Microservices: Monolithic architecture, in which application processes are grouped and handled as a single service, is having to be updated because code grouped this way requires complete app changes and is not agile. Microservices architecture modules are built as independent services that communicate through APIs. Modules can be built, managed, and changed independently of the others, can be scaled easily, and can be reused in other projects.
- Multi-model databases: A database management system that organizes many NoSQL data models using a single backend, a unified query language, and an API. There is a growing trend toward databases offering many models and supporting several use cases. They differ from relational databases in that data models from diverse databases can be queried and combined with a specific query language.
- New computing languages: New programming languages are being developed to solve problems like speed optimization, scalability, and a need for user-friendly learning curves. Pluses in a language include being “memory safe,” or able to translate into and secure JavaScript syntax, or being hybrid like F#, or having early code-error detection.
- Online marketing: Like other software development plans, online marketing platforms are increasing reliance on augmented reality, AI, chatbots, and other digital tech. Omni-channel experiences are being promoted in order to improve customer retention; a customer should be able to expect the same experience in a store, online marketplace, social media, or by phone.
- Progressive Web Applications (PWA): PWAs are coded with HTML, CSS, and JS like websites, but without a browser interface or a need to download. They are platform agnostic for mobile, tablet, and desktop computing. Examples include retail storefronts, social magazines, game apps, etc.
- Sharing Economy: Collaborative consumption platforms connect users with suppliers in real-time with GPS, data analytics, and AI. Focus is on personalized experiences, security, and the use – not ownership – of products and resources. Uses include crowdfunding, lodging, transportation, co-work spaces, and goods recirculation.
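The blockchain item above – blocks linked by hashes so that recorded data can be viewed but not quietly modified – can be shown in a few lines of Python. This is a teaching sketch only: real blockchains add transaction signing, decentralization, and consensus, none of which appear here, and the record contents are invented for the example.

```python
# Sketch of blockchain linking: each block stores the previous block's
# hash, so editing any earlier block breaks every hash that follows.
import hashlib

def block_hash(data, prev_hash):
    """Hash a block's data together with the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    """Turn a list of records into linked (data, prev_hash, hash) blocks."""
    chain, prev = [], "0" * 64  # the first block links to a placeholder hash
    for data in records:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    """Recompute every hash; any edited block invalidates the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["pay Alice 5", "pay Bob 3"])
assert is_valid(chain)
chain[0]["data"] = "pay Alice 500"  # tampering with an early block...
assert not is_valid(chain)          # ...is detected down the chain
```

This tamper-evidence is what the text means by data that “can be viewed but not modified”: changing one block would require recomputing every later hash across every copy of the chain.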
Access to Technology
A constant challenge is access to computing tools, the Internet, and efficient connections that provide security, speed, and no interruptions of service.
Part of the issue, as previously discussed, is a limitation of infrastructure in various regions in the U.S. and various countries around the world. Another issue is cost, for subscriptions to broadband service and for equipment that can do the work needed in education and workplace contexts. Another issue – which is outside the scope of this book – is standardization of services, as was recognized in education during Covid. Different workplaces – even branches of the same company – might have a lack of standardization of employee resources and management platforms.
Connectivity
5G: Now being rolled out and promoted is 5G, another generation of mobile networks. 5G is expected to have lower latency, which should effectively support AI, IoT, and augmented reality. It promises faster download speeds than the current 4G, which could improve video conferencing and automated tasks.
Interconnectivity: The distributed IT infrastructure of many companies will use hybrid-cloud or multi-cloud platforms. Data and processing will be managed in the cloud while remaining quickly accessible to devices. More web platforms will be multifunctional and reliant on this interconnectivity so that companies can keep costs low, productivity high, transactions and data highly secure, and customer relations personalized.
Digital accessibility: Everyone, including persons with divergent needs, needs full access to digital content. Accessibility standards processed by usability engineers need to be personalized and applied to computer interfaces. The goal needs to be Universal Design, which would remove most or all barriers altogether, rather than just behaving as assistive technology. This goal still seems distant, although AI and robotics in web platform usability could offer significant advancement, especially since the continuing increase in technical layers can be off-putting to users with divergent capabilities.
Some emerging accessibility benefits include:
- Improved voice-to-text functions.
- Connecting with a virtual assistant.
- Smart home functionality.
- The assistive technology tool Morphic, which personalizes the computer to a user’s needs.
- Possible public access bots/interface add-on that could assist a variety of accessibilities.