



What is computer science (informatics)?
The word informatics is a combination of the words "information" and "automatic" (or, in some derivations, "mathematics").
It can therefore be described as a science that deals with the automatic processing of information, which mainly takes place on digital computers.
Computer science is closely linked to areas such as logic, physics and mathematics.
It established itself as an independent science just 60 years ago.
What is information?
Information is a general property of nature. It can be recorded, stored, processed and passed on. Information is permanently bound to a material carrier.
Information can structure a system, control its behavior, or control other information in a system (a computer program, for example, is information that controls a computer).
Information can be duplicated at will.
Information is transmitted with characters or signals.
When a recipient receives a message, we also speak of information.
Important figures of informatics
● Charles Babbage (Mechanical calculator)
● Gottfried Wilhelm Leibniz (Principles of binary computers based on bits)
● George Boole (Boolean algebra: base of computer science)
● Alexander Graham Bell (Electric information transmission)
● Ada Lovelace (First programming language)
● Nikola Tesla (Inventor of the AND logic gate circuit)
● John von Neumann (Inventor of a computer architecture still used today)
● Alan Turing (Core concepts of theoretical computer science - the Turing machine, the Turing Bombe, pioneer of AI)
● Konrad Zuse (First binary programmable computer)
● Claude Shannon (Founder of the information theory)
● Edgar Codd (Relational databases)
● Grace Hopper (First compiler, COBOL)
● Ole-Johan Dahl (Simula)
● Gary Kildall (CP/M operating system, BIOS)
● Tony Hoare (Quicksort algorithm, influenced Ada and Go)
● Douglas Carl Engelbart (Computer mouse, HCI)
● Edsger Wybe Dijkstra (Structured programming)
● Margaret Hamilton (Software for the Apollo 11)
● Marc Andreessen (Mosaic, one of the first web browsers)
● Alan Kay (Contributions to OOP - Smalltalk - and the graphical user interface)
● Joseph Weizenbaum (ELIZA)
● Leonard Kleinrock (Theory of computer networks and the Internet)
● Tim Berners-Lee (Founder of the WWW, inventor of HTML)
● Richard Stallman (GNU)
● Bill Gates (PC-pioneer, MS-DOS, BASIC, founder of Microsoft)
● Paul Allen (Co-founder of Microsoft)
● Steve Jobs (Co-founder of Apple, founder of NeXT and Pixar, driving force behind ground-breaking hardware and software-products, pioneer in bringing design to computing hardware)
● Steve Wozniak (Co-Founder of Apple, technical backup of Jobs product-ideas)
● Larry Page & Sergey Brin (Founders of Google, new prioritization algorithm for search engines that became the simple gateway to the Internet)
● Mark Zuckerberg (Founder of Facebook/Meta, breakthrough of social networking)
● Kenneth Lane Thompson (Co-creator of Unix, the B language and the first shell)
● Linus Torvalds (Inventor of Linux and Git)
● Anders Hejlsberg (Lead architect of C#, Turbo Pascal)
● Niklaus Wirth (Pascal)
● Kernighan & Ritchie (C language, language of choice for many operating systems: Windows, Unix & Linux)
● Bjarne Stroustrup (C++: a language that brought OOP into mainstream systems programming and helped make large software systems possible)
● Larry Wall (Perl)
● James Gosling (Java)
● Guido van Rossum (Python)
● Charles Goldfarb (GML/SGML, of which XML is a subset)
● Bill Joy (BSD Unix, TCP/IP, vi text editor, C shell command line interpreter, Solaris)
● Vint Cerf & Bob Kahn (TCP/IP)
● Paul Baran (Packet-switched networks)
● Ray Tomlinson (First email system)
● Whitfield Diffie & Martin Hellman (Public-key cryptography)
● Ron Rivest & Adi Shamir & Leonard Adleman (RSA public key algorithm)
● John McCarthy (The term "artificial intelligence", LISP)
● Marvin Minsky (AI-pioneer)
● Tomas Mikolov (word2vec, neural language models that paved the way for LLMs)
● Ashish Vaswani (co-author of the Transformer architecture behind LLMs)
What is a database?
A database is a structured collection of data stored in a computer system.
Databases are typically used to manage large amounts of data and to facilitate the retrieval of information.
Four classic database types are relational, hierarchical, object-oriented and network databases.
Well-known database systems are the Oracle database, Microsoft Access, Microsoft SQL Server, dBase and MySQL.
The most commonly used database query language is SQL.
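A minimal sketch of how such a database is typically queried with SQL from a program, here using Python's built-in sqlite3 module; the table, columns and data are invented purely for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")   # throwaway in-memory database
    cur = conn.cursor()

    # create a small relational table and fill it with example rows
    cur.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
    cur.executemany("INSERT INTO person (name, city) VALUES (?, ?)",
                    [("Ada", "London"), ("Konrad", "Berlin")])
    conn.commit()

    # a typical SQL query: select and filter rows
    for row in cur.execute("SELECT name, city FROM person WHERE city = ?", ("Berlin",)):
        print(row)   # ('Konrad', 'Berlin')

    conn.close()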
What is the Internet?
The Internet is a worldwide, public network of computer networks that is organized in a heterogeneous, decentralized and hierarchical manner. The Internet is a uniform communication system between different computers and computer networks.
The Internet makes it possible to call up information at any time and almost anywhere.
In the 1960s several visionaries, including Ted Nelson and Douglas Engelbart, independently suggested computerizing the concept of cross-references, creating the clickable link we use on the Web. Nelson called it a "hyperlink" and the computerized text "hypertext".
The precursor of the Internet was the ARPANET; the commercialization of the Internet started in 1989, and the World Wide Web technology was made available to the public free of charge by CERN - the European Organization for Nuclear Research - in 1993.
Today there are about two billion websites worldwide, corresponding to a size of approx. 40 zettabytes (40 sextillion bytes).
Globally, the number of Internet users is 4.9 billion.
Internet services
- World Wide Web (WWW) - it helped the Internet to break through in 1993
- FTP (File Transfer Protocol)
- newsgroups
- Telnet
- telephony
- web television
- webradio
- discussion forums
- chat
Internet use
- Internet surfing: finding websites on the WWW using web search engines. The most widely used web search engine is Google. Other web search engines are Microsoft Bing and Yahoo!
- Web browsers are used to display websites in WWW. Examples of web browsers are Google Chrome, Microsoft Internet Explorer, Mozilla Firefox, Opera, Microsoft Edge and Apple Safari.
- Email (e.g. Google Gmail, AOL Mail, Microsoft Outlook, Thunderbird, WEB.DE)
- Downloading software and apps.
- Social media (e.g. Facebook, YouTube, WhatsApp)
- Video conferences (e.g. Microsoft Teams, Skype)
- Book a trip
- Book tickets
- Online Shopping
- Cloud Computing
- Internet of Things (IoT)
Further terms
- URL (Uniform Resource Locator) is the path to a specific file on a server. Typically, a URL denotes an Internet address and is accessed using web browsers. In its simplest form, a URL consists of three parts: protocol (https://, ftp://), domain or server name (www.domain.de) and file path (/directory/file.html); see the sketch after this list. The file path can consist of several directory levels.
- An Internet domain is entered into a web browser to reach a web page. Each domain consists of three levels, each separated by a period (www.domain.de). A domain is unique worldwide.
- The IP address (IP stands for Internet Protocol) is the unique address of every device on the Internet or in a local network.
- An Internet Service Provider (ISP) is a service provider that provides services (such as Internet connection) to Internet users.
- The markup and stylesheet languages HTML (HyperText Markup Language) and CSS (Cascading Style Sheets), the programming languages Java, JavaScript, TypeScript and PHP, but also Python, Perl, Ruby, C# and C++, as well as the frameworks Angular, Spring, Vue.js and Django are usually used to create web applications.
- A homepage is the start page of a website.
- A website can also be created without time-consuming programming using a content management system or website builder (e.g. Webador, Wix, Jimdo, IONOS, WordPress).
- Nowadays, devices (e.g. PCs) are usually connected to the Internet through a router, typically over WLAN (Wireless Local Area Network), which works via a radio link.
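A minimal sketch of the URL structure described above, split into its three parts with Python's standard urllib.parse module; the example URL is invented for illustration.

    from urllib.parse import urlparse

    url = "https://www.domain.de/directory/file.html"
    parts = urlparse(url)

    print(parts.scheme)   # 'https'                 -> protocol
    print(parts.netloc)   # 'www.domain.de'         -> domain or server name
    print(parts.path)     # '/directory/file.html'  -> file path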
What is social media?
Social media is digital media that allows users to connect via the internet. The largest social media are Facebook (almost 3 billion users), YouTube (approx. 2.5 billion users), WhatsApp (2 billion users), Instagram, Wechat, TikTok, Pinterest and Twitter.
Professional networks where members primarily manage their professional contacts and receive job offers are LinkedIn (850 million users in 200 countries - the biggest in the world) and Xing in German-speaking countries (approx. 20 million users).
A free internet-based instant messaging service that allows you to make video and voice calls is Skype (launched in Luxembourg in 2003), another software for video conferencing is Microsoft Teams.
Well-known online job and metajob search engines in Germany are Talent.com, Jobvector, Stepstone, Indeed, Monster, jobrapido, JobRobot, jobware, Kimeta, Stellenanzeige.de, jobworld and MetaJob.
A comprehensive, free, open-contribution online encyclopedia is Wikipedia (started in 2001 by Jimmy Wales, available in 315 languages).
What is cryptology?
The encryption of a message is cryptography (from the Greek kryptos for "hidden" and graphein for "to write"). Deciphering an encrypted message is cryptanalysis. Both together - i.e. encryption and decryption - are called cryptology. Cryptology is thus the science of encrypting and decrypting messages or data for the purpose of secrecy.
Three epochs can be identified in cryptography: in the first, encryption was done by hand or with mechanical disks, in the second (around 1920 to 1970) special machines were used and in the third (since 1970) computers played a central role.
Milestones in crypto history
- The Skytale (an encryption staff)
- The Caesar cipher (based on monoalphabetic substitution and used by Julius Caesar over 2000 years ago to exchange secret messages)
- The cipher disk (one of the first devices for encryption)
- Al-Kindi (He was an Arab philosopher, scientist, mathematician, doctor and musician. Al-Kindi is considered one of the pioneers in the field of cryptanalysis. He was also the author of the first treatise on cryptanalysis. In it he showed how the monoalphabetic substitution could be broken by the method of frequency analysis)
- the secret writing of Mary Stuart (16th century)
- The Vigenère cipher (a manual key method, a 16th-century monographic polyalphabetic substitution method)
- Leon Battista Alberti (is considered the first important cryptologist in Europe. Together with his successors, he developed polyalphabetic encryption, which was long considered unbreakable. He published his work in 1466/ 1467)
- ADFGX (substitution procedure followed by transposition - used in WWI)
- The One-Time-Pad (a one-time key procedure - the key has at least the same length as the message itself. The procedure is considered absolutely secure)
- William Friedman (his career as a code breaker began in the Riverbank Laboratories near Chicago, and during the First World War he was a cryptologist in the service of the US government. He coined the term cryptanalysis, developed the statistical Friedman test and the index of coincidence, solved around 1000 encryptions in his life and founded the Signals Intelligence Service (SIS), a secret department of the US military dedicated to deciphering enemy communications)
- Arthur Scherbius (the father of the Enigma cipher machine, which he patented in 1918. The messages encrypted by Enigma were considered undecipherable because the Enigma could theoretically generate an astronomically large number of different keys)
- Alan Turing (British computer pioneer and cryptanalyst who designed the Turing bombe, which could be used to break the Enigma encryption)
In the computer era, very secure encryption methods were developed with the help of computers, and the problem of key distribution was solved. Examples of such encryption methods:
- Data Encryption Standard (DES) (Approved as an official standard for the US government in 1977 and has been widely used internationally since then.)
- Public Key Cryptography (In this process, the data is encrypted with a publicly known key and can only be decrypted with a matching private key that is kept secret.)
- Pretty Good Privacy (Was developed by Phil Zimmermann and uses a variant of the public key system in which every user has a public key that can be shared at will. There is also a private key that is only known to the user.)
- Advanced Encryption Standard (AES) (which is a symmetric encryption algorithm that encrypts fixed blocks of data of 128 bits)
- Rivest-Shamir-Adleman (RSA) method (uses a mathematical one-way function: multiplying two very large prime numbers - currently with about 300 decimal digits - is easy, but the way back, i.e. factoring the result, would take even supercomputers thousands of years - therefore it is considered practically uncrackable)
- Triple DES (Data Encryption Standard) (It is a multiple encryption based on DES encryption)
- Twofish (an encryption algorithm with key lengths of up to 256 bits)
Before computers were used, there was a struggle in cryptography between the side that developed better and better encryption methods and the side that hoped to break those ciphers.
Mary Stuart's cipher was decrypted, with fatal consequences for the Queen. The Vigenère cipher was considered unbreakable at the time but was cracked by Charles Babbage in 1854. The encrypted messages from the German Enigma machine were also considered undecipherable, but from 1940 onwards they were cracked by the codebreakers at Bletchley Park, England (albeit using an electromechanical machine, the Turing bombe). According to general opinion, this shortened the duration of World War II in Europe.
There are the encryption methods monoalphabetic substitution and polyalphabetic substitution.
Monoalphabetic substitution is a simple method of encrypting plaintext. Each character of the plain text is substituted by another character assigned to it according to a fixed encryption rule. The monoalphabetic substitution could be broken by the method of frequency analysis.
Polyalphabetic encryption is text encryption in which one letter or character is assigned another letter/character. In contrast to monoalphabetic substitution, several ciphertext alphabets are used for the characters of the plaintext.
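A minimal sketch, in Python, of a monoalphabetic substitution (the Caesar cipher) and the frequency analysis that breaks it, as described above; the plaintext and the shift are invented for illustration.

    from collections import Counter
    import string

    def caesar(text, shift):
        # shift every letter by a fixed amount - a simple monoalphabetic substitution
        out = []
        for ch in text.upper():
            if ch in string.ascii_uppercase:
                out.append(chr((ord(ch) - 65 + shift) % 26 + 65))
            else:
                out.append(ch)
        return "".join(out)

    plaintext  = "ATTACK AT DAWN AND AGAIN AT DUSK"
    ciphertext = caesar(plaintext, 3)
    print(ciphertext)

    # frequency analysis: the most common ciphertext letter probably stands for
    # a very common plaintext letter (for this short sample we guess 'A')
    most_common = Counter(c for c in ciphertext if c.isalpha()).most_common(1)[0][0]
    guessed_shift = (ord(most_common) - ord("A")) % 26
    print(caesar(ciphertext, -guessed_shift))   # recovers the plaintext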
There are two basic types of encryption: symmetric encryption (using just a single key) and asymmetric encryption (using two keys).
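A purely illustrative sketch of the asymmetric (public-key) idea behind RSA, using deliberately tiny textbook primes; real keys use primes with hundreds of decimal digits, as noted above.

    p, q = 61, 53                 # two (far too small) prime numbers
    n = p * q                     # 3233, part of both keys
    phi = (p - 1) * (q - 1)       # 3120
    e = 17                        # public exponent, coprime to phi
    d = pow(e, -1, phi)           # private exponent (modular inverse), here 2753

    message = 65                            # any number smaller than n
    ciphertext = pow(message, e, n)         # encrypt with the public key (e, n)
    decrypted  = pow(ciphertext, d, n)      # decrypt with the private key (d, n)
    print(ciphertext, decrypted)            # decrypted == 65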
Cryptology can be found everywhere in everyday life: Internet surfing, online banking, paying with a credit card and withdrawing money from ATMs - all of this would be unthinkable without cryptography.
The latest approach is quantum cryptography, which is said to be absolutely unbreakable because an eavesdropper is detected: their measurement disturbs the data being sent.
Newer developments in information technology
Cloud computing
Cloud computing refers to the use of IT infrastructures and services that are not kept on site on local computers, but are rented as a service and accessed via a network (e.g. the Internet).
Examples: Microsoft OneDrive, Google Drive and Apple iCloud.
Edge computing
Edge computing is a form of data processing that occurs directly at or close to a specific data source, minimizing the need to process data in a remote data center.
Internet of Things (IoT)
IoT refers to a network of physical objects that are connected via the Internet so that they can exchange data with each other. Edge computing, which supports IoT, takes place closer to the point of data creation.
IoT devices, from smart sensors to wearable technologies, collect real-time data and transmit it over networks. This constant flow of information enables companies to monitor assets, optimize supply chains and improve product performance. The connected objects, equipped with sensors, software and connectivity, can be everyday items, appliances, machines, vehicles, and even people, all of which can communicate and exchange information over the Internet without direct human intervention.
Artificial Intelligence (AI)
"AI has become ubiquitous. It shows us the way when we drive, answers our questions, offers music recommendations, and supports a growing number of business processes in the workplace. In fact, AI is finding its way into so many aspects of our personal and professional lives that my company has started to call it everyday AI. I argue that it will soon be as ubiquitous - and necessary - as electricity."
(Florian Douetteau, CEO and co-founder, Dataiku)
"The 21st century is the first in which humans will need to make lifelong learning a practical reality. Doing so will mean configuring learning journeys to individuals' lifestyles and neurologies. AI enables us to do so, personalising learning and thereby potentially transforming our model of education for the public good."
(CEO of the RSA, Andy Haldane)
Artificial Intelligence (AI) attempts to mimic human intelligence, at least in part. The computer simulation of human learning and thought processes, as well as planning and creativity, therefore also makes use of findings from neurology, linguistics, epistemology and psychology.
AI can independently solve tasks without being specifically programmed to do so.
AI gets better with each new task.
The use of AI is growing rapidly and it is clear that this technology has the potential to revolutionize many aspects of our lives. It is important to ensure that AI is used responsibly and ethically so that it benefits all of humanity.
AI has revolutionized the way we process and use data. Machine learning algorithms can analyze massive data sets, uncover hidden patterns, and make predictions with remarkable accuracy. In a business context, AI supports everything from personalized customer experiences to predictive maintenance, making operations more efficient and cost-effective.
Despite decades of research, however, AI development is still relatively in its infancy. For it to be used in sensitive areas such as automated driving or medicine, it must become more reliable and secure against tampering.
With LLMs and a technology called Diffusion AI, systems can now achieve or surpass other capabilities we often associate with intelligence, such as: responding accurately to speech input and creating new text content, writing original stories and poems, analyzing content for tone and emotion, and creating unique images and videos.
Current powerful processors and graphics cards in computers, smartphones, and tablets enable ordinary consumers to access AI programs.
More and more AI-generated content (text, images, audio) ends up on the Internet, but its quality will continue to decline if models are trained on their own output (so-called MAD, model autophagy disorder). Countermeasures therefore need to be taken to keep AI from degenerating further. There are efforts to make AI safer and more responsible; for example, Microsoft, Anthropic, Google and OpenAI have launched the Frontier Model Forum, an industry association focused on ensuring safe and responsible development of frontier AI models.
It is estimated that AI could contribute between $10 and $15 trillion to the global economy by 2030. The global AI market is expected to have a compound annual growth rate of 36.2% from 2022.
Almost every government, large company, and organization in the world is working on an AI strategy.
When companies innovate with next-generation AI, three criteria ensure success: technological feasibility, economic viability, and societal value. Future AI technologies include multimodal systems, AI legal assistance, humanoid robots and research support.
Germany will invest around €500 million in AI in 2024. Germany and Europe are to take a leading position in the world "Powered by AI" and achieve technological AI sovereignty. The German Federal Ministry of Education and Research (BMBF) considers the provision of AI tools, AI skills and AI infrastructure to be part of basic services.
Despite its advances, however, AI is not as good as humans when it comes to sound reasoning or strategic and creative thinking.
Currently, AI can be categorized into three groups: narrow-, general-, and super-artificial intelligence.
AI Categories
- Artificial Narrow Intelligence (ANI) or Weak AI, are systems that are considered the least computationally potent AI. These systems include much of the contemporary machine learning and deep learning models with singular and narrowed functions such as Object Classification and Speech Recognition.
- Artificial General Intelligence (AGI) or Strong AI, are systems that would pass the Turing Test, with intelligence that produces outputs that are indistinguishable from that of an adult human being. As of publication, no publicly known AGI has been developed.
- Artificial Super Intelligence (ASI) or Superintelligence, is another form of AI yet to be developed that contains intelligence that can produce outputs that would vastly surpass the capacity of a human being.
History of the AI
What counts as the first true instance of AI is arguable; some consider the mechanism behind "Ars generalis ultima" (The Ultimate General Art), published by Ramon Llull in 1308, to be an artificial intelligence, with mechanical means to create new knowledge from logic and complex mechanical techniques.
In 1914, Spanish engineer Leonardo Torres y Quevedo demonstrated the first chess-playing machine in Paris, capable of receiving information about a chess game and playing a king and rook endgame against the king from any position without the aid of human intervention.
In 1950, Alan Turing published "Computing Machinery and Intelligence", introducing with it, the concept of the imitation game. This would later be known as the Turing Test, which tests a machine’s ability to display behavior and produce output that is indistinguishable from that of an adult person.
The years ranging from 1956-1974 are considered the renaissance period of artificial intelligence with developments such as semantic nets, allowing machines to solve algebra word problems, and search algorithms that allowed machines to approach solving problems much like solving a maze.
Following this period, the field of AI experienced lulls and bursts of progress (between the years 1974 and 2011), during which computing power and the amount of available data were considerable bottlenecks. This period gave way to the modern era with agents such as Deep Blue (1997) and later AlphaGo, which were capable of matching and exceeding the best human board game players in the world.
There are use cases of AI that can reduce or replace some work done by humans. These include:
- Processing and summarizing text
- Filtering content created by humans
- Creating advanced chatbots
- Writing code
- Customer service and support
- Generate crafted documents (e.g., resumes, proposals, emails)
List of AI tools
- ChatGPT
- Bard
- DALL-E 2
- Rainbow AI
- GuestLab
- Homage
- Laxis
- Quillbot
- Midjourney
- Stable Diffusion
List of AI Software
- Google Cloud Machine Learning Engine (machine learning, trains model based on data, not free)
- Azure Machine Learning Studio (machine learning, browser-based, model is delivered as a web service, free)
- TensorFlow (machine learning, desktops, clusters, is for everyone from beginners to experts, free)
- H2O AI (machine learning, distributed in-memory, programming languages: R & Python, AutoML functionality included, free)
- Cortana (Virtual Assistant, all operating systems, free)
- IBM Watson (Question Answering System, Linux, it learns a lot from small data, free)
- Salesforce Einstein (CRM system, cloud-based, no need to manage models and data preparation)
- Infosys Nia (machine learning, chatbot, operating systems: Windows, Mac and web-based, offers three components, namely data platform, knowledge platform and automation platform)
- Amazon Alexa (Virtual Assistant, operating systems: Fire OS, iOS and Android, it can connect with devices such as cameras, lights and entertainment systems, free with some Amazon devices or services)
- Google Assistant (Virtual assistant, operating systems: Android, iOS and KaiOS, supports two-way conversation, free)
- Dante AI (a chatbot based on GPT-4, OpenAI's most advanced language model, which can recognize over 100 different languages, making it inclusive and language-friendly; it is offered through Amazon Web Services)
Microsoft's Bing search engine uses AI and GPT-4.
DeepL is an AI-based translator that supports a growing number of languages.
Microsoft under Satya Nadella has invested heavily in OpenAI and integrated its models into Microsoft products.
ChatGPT is a publicly available web-based chatbot and API interface created by OpenAI. It connects to a large language model (LLM) called GPT (Generative Pre-Trained Transformer). GPT includes some of the largest models ever created. After the model training, there is further fine-tuning to improve its generated response.
ChatGPT is part of a new group of LLMs that are now made available by a technology called transformers. Other notable and similar LLMs include BERT used by Google to understand user searches; DALL-E, which can generate images; and GPT-Neo, a promising open-source LLM.
ChatGPT is the packaging of a set of trained LLMs into a chatbot and developer friendly interface. This has created a system where there is a low barrier to entry and almost everyone can start using the LLM.
ChatGPT is trained by using vast quantities of data from the internet. However, LLMs are different from internet search engines in the following ways:
- No ongoing daily web crawlers and updates.
- Data can be out of date; the original ChatGPT, for example, was trained on data only up to 2021.
- The size of the data trained in the model.
- ChatGPT uses a type of fine-tuning called Reinforcement Learning from Human Feedback (RLHF).
- ChatGPT-specific approaches including prompts, embeddings, and parameters such as temperature.
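As a rough illustration of the temperature parameter mentioned in the last point, the following Python sketch shows how temperature changes next-token sampling; the token scores are invented numbers, not taken from any real model.

    import math, random

    logits = {"cat": 2.0, "dog": 1.0, "car": 0.1}   # assumed raw model scores

    def sample(logits, temperature):
        # softmax with temperature: low T -> almost greedy, high T -> more random
        scaled = {t: math.exp(v / temperature) for t, v in logits.items()}
        total = sum(scaled.values())
        probs = {t: v / total for t, v in scaled.items()}
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        return token, probs

    print(sample(logits, temperature=0.2))   # nearly always picks "cat"
    print(sample(logits, temperature=2.0))   # noticeably more varied choices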
Breakthrough with Transformers
The breakthrough that enabled BERT and the GPT models behind ChatGPT is called the transformer, introduced in a 2017 paper entitled "Attention Is All You Need". It was presented as a way to build machine translation systems with much faster, parallel execution. It also introduced a new concept called self-attention to further make sense of long language sequences: it creates connections between different parts of the sequence.
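A minimal numpy sketch of the self-attention idea: every position in a sequence is related to every other position. The matrix sizes and values are invented; real transformers use learned query/key/value projections and many attention heads.

    import numpy as np

    def self_attention(X):
        # X: (sequence length, embedding size); for simplicity Q = K = V = X
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)                    # pairwise similarity of positions
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
        return weights @ X                               # each output mixes all positions

    X = np.random.rand(4, 8)          # a toy "sentence": 4 tokens, 8-dimensional embeddings
    print(self_attention(X).shape)    # (4, 8)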
Generative AI
GenAI is based on Large Language Models (LLMs). LLMs are trained on terabytes of textual data from the Internet. These models have the capability of delivering complex, high-level responses to human language prompts.
In the case of ChatGPT, one breakthrough was the discovery that with some additional training, the LLMs can be used to produce impressive text-based results.
GenAI is the connection of LLMs with technologies that, in addition to text results, can generate digital content such as images, video, music, or code.
There is exploration of going beyond digital content and using this technology for things as varied as the discovery of new molecules and 3D designs.
Things are changing quickly and the impact is only just starting to be predicted and felt. Most people are expecting changes in work and living lifestyles.
The models have a generative ability and were the start of Generative AI. ChatGPT demonstrated the ability to generate text based on natural language prompts.
Using the concept of foundation models, vast amounts of pre-trained data can be leveraged with a small amount of additional tuning and prompting. Tuning adds labels to the data. Prompting bridges the gap between training and intention. While training the LLM is expensive, its usage - referred to as inference - is more cost-effective.
AI aims to match or exceed the intelligence of a human being. Things that are often associated with intelligence, such as doing math, playing chess, or remembering vast amounts of data, can already be done more quickly and better by computers.
With LLMs and a technology called diffusion AI, systems can now match or exceed other skills we often associate with intelligence such as:
- Accurately responding to language inputs and creating new text content.
- Writing original stories and poems.
- Analyzing content for tone and emotions.
- Creating unique images and video
Generative coding is the use of Generative AI (GenAI) to assist in software development. It was one of the first applications of Generative AI technology to be commercialized.
GenAI can generate new images from existing text prompts and images. Because of a random seed, the images generated are unique creations. GenAI uses diffusion models to create these new unique images.
GenAI allows for the creation of long-playing high-fidelity music from text descriptions and additional sound input conditions.
Generative AI can create blogs, write ad copy, create new content based on text input, and is capable of summarizing and changing the style and tone of text content.
In software engineering, AI is becoming increasingly common and can help developers speed up code generation, redesign, and documentation by 20-50% (McKinsey report).
Generative AI, GPT-Engineer, ChatGPT, and large language models (LLMs) such as GitHub Copilot and other AI code generation tools are changing software development practices and productivity.
LISP historically became a common language for programming AI.
Code Llama (Code Large Language Model Meta AI) is an AI system that generates and explains code in English. Similar to GitHub Copilot and Amazon CodeWhisperer, Code Llama helps developers code and debug in various languages such as Python, Java, and C++. However, there are concerns that AI coding tools can create security vulnerabilities or infringe on intellectual property. Developers are increasingly using continuous testing with generative AI.
Llama stands for Large Language Model Meta AI. It is an autoregressive large language model that uses an optimized transformer architecture. Llama 2, released in 2023, is the second foundation model from Meta AI. The first version of Llama, released in late February 2023, was already openly available; Llama 2 is not only openly available but can also be used commercially. This opens up many new possibilities, as a wide variety of applications can be built on the Llama architecture.
Llama 2 is available in different sizes with 7, 13, 34 or 70 billion parameters. Training was done on 2 trillion tokens with over one million human annotations.
An artificial neural network (ANN; German abbreviation KNN) is a system of hardware and/or software that mimics the way neurons function in the human brain. ANNs, also known simply as neural networks, are the basis of a variety of deep learning technologies that fall under the umbrella of AI.
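A minimal sketch of a single artificial neuron, the building block of such networks: weighted inputs, a bias and an activation function. The weights and inputs are invented numbers.

    import math

    def neuron(inputs, weights, bias):
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

    print(neuron([0.5, 0.8, 0.1], [0.4, -0.6, 0.9], bias=0.05))   # a value between 0 and 1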
Artificial General Intelligence (AGI) would enable computers to learn, act, and think flexibly like humans. Despite all the advances, today's AI systems remain "narrow," meaning they are good at certain tasks and must be reprogrammed to do new things.
In cybersecurity, the cat-and-mouse game between hackers and defenders has taken a new turn with the advent of AI technology. Hackers, always looking for more efficient and effective ways to breach security, have harnessed the power of AI to crack passwords in unprecedented ways.
There are five strategies hackers are using lately:
- Intelligent brute force attacks
- Password prediction models
- AI-powered dictionary attacks
- Machine learning (ML)-based attacks
- Hybrid attacks
Today, cybersecurity is a multi-billion dollar industry with an ever-growing list of companies specializing in the development and implementation of cybersecurity solutions. From firewalls to encryption, these solutions are designed to protect both individuals and businesses from the ever-evolving threat of cybercrime.
One of the biggest trends in cybersecurity is the integration of artificial intelligence (AI) and machine learning (ML). These technologies have the potential to revolutionize our approach to cybersecurity by allowing us to analyze massive amounts of data and identify threats in real time.
ML algorithms, for example, can be used to detect anomalies in network traffic, identify potential attacks, and respond to them before they can cause significant damage. AI and ML can also be used to analyze big data and help identify patterns and trends that may indicate a cyberattack.
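A minimal sketch of the anomaly-detection idea just mentioned: flag network-traffic measurements that deviate strongly from the norm. The traffic numbers and the threshold are invented for illustration.

    import statistics

    traffic = [102, 98, 105, 99, 101, 97, 430, 103]   # assumed requests per minute
    mean = statistics.mean(traffic)
    stdev = statistics.stdev(traffic)

    for minute, value in enumerate(traffic):
        z = (value - mean) / stdev
        if abs(z) > 2:                                 # simple z-score threshold
            print(f"minute {minute}: {value} looks anomalous (z = {z:.1f})")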
Natural Language Processing (NLP) is a branch of computer science - and specifically an area of AI. It deals primarily with our most important means of communication, language, and combines linguistics with statistics, machine learning and deep learning. NLP aims to decipher language structures and rules and to create models that can understand, break down, and extract important details from texts and speech. In other words, it is about enabling computers to understand texts and spoken words in a similar way to humans.
Large Language Models (LLMs) are transforming digital marketing. Google, for example, recently announced the integration of LLMs into its search results in response to OpenAI/ChatGPT-based Bing Chat.
Contextual AI enables companies to realize the true potential of AI by embedding language models in their internal knowledge bases and data sources. Contextual Language Models (CLMs) are built on Google Cloud and create answers tailored to an organization's data and institutional knowledge. This leads to higher accuracy, better compliance, less hallucination, and the ability to trace answers back to source documents.
It could also be that behaviors emerge that are completely new to computer science. ETH researchers have developed a new programming language called LMQL (Language Model Query Language). By combining natural language and programming language, more targeted queries can be made to large language models such as ChatGPT. The new programming language allows users to formulate security constraints to avoid undesirable results as much as possible.
Azure AI Speech is a speech service from Microsoft and provides speech recognition and speech synthesis capabilities with a speech resource. You can transcribe speech to text with high accuracy, convert text to natural-sounding speech, translate spoken audio, and use speaker recognition in conversations.
A genetic algorithm (GA) is a method of solving problems by imitating natural processes, especially natural evolution, such as mutation, inheritance, selection, and crossover (recombination). GAs are inspired by Darwin's theory of evolution. The GA was invented by John Holland at the University of Michigan in the USA in the 1970s and has been further developed not only by him and his students, but also by his colleagues.
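A minimal Python sketch of a genetic algorithm as described above: a population of candidate bit strings evolves via selection, crossover and mutation toward a target. The target, rates and population size are invented for illustration.

    import random

    TARGET = [1] * 20   # fitness = number of bits matching this target

    def fitness(individual):
        return sum(a == b for a, b in zip(individual, TARGET))

    def evolve(pop_size=30, generations=50, mutation_rate=0.02):
        population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]               # selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TARGET))          # crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ 1 if random.random() < mutation_rate else bit
                         for bit in child]                      # mutation
                children.append(child)
            population = children
        return max(population, key=fitness)

    best = evolve()
    print(best, fitness(best))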
Applications of AI
Medicine
AI is used in prevention (e.g., smart wearables), screening (detecting abnormalities on an X-ray, detecting Parkinson's by analyzing retinal images), diagnostics, therapy (robot-assisted surgeries), follow-up care, and to provide new medicines. AI can also be used to create virtual assistants that can inform and help patients.
Finance and banking
AI is currently being used to detect fraud, control risk, and provide investment advice. For example, AI-powered algorithms can be used to examine financial data and identify trends that may indicate fraud. AI can also be used to develop trading techniques that support investor decision-making.
Education
AI is rapidly changing the education landscape: it has the potential to personalize learning, improve assessment, and automate tasks so teachers have more time to focus on creative and strategic work. AI is thus being used in personalized learning, intelligent content creation, intelligent assessment systems, virtual classrooms and virtual reality, and intelligent student support systems - while also raising ethical considerations and challenges.
Logistics
Data entry and customer service are two examples of tasks automated using AI. AI can be used in the creation of new goods and services. For example, AI can be used to design self-driving cars or personalize suggestions for customers.
Entertainment
AI is being used to develop new forms of entertainment, including chatbots and virtual reality games. AI can also be used to improve the quality of currently available entertainment, for example by creating more compelling special effects or interesting storylines.
Sports and fitness
AI is being used to provide more personalized fitness experiences and new sports training tools. For example, coaches using AI can provide instant feedback to players to help them perform better. AI can also be used to create virtual reality games that can be played for training or fun.
Self-improvement
AI is being used to develop tools that can help people improve their lives. For example, AI-powered tools can be used to monitor goal progress, provide encouragement, and offer guidance. AI can be used to create personalized learning experiences that support the acquisition of new skills.
Other current or planned uses of AI include
- Industry 4.0
- Software coding, documentation and test automation
- Cognitive modeling and logic programming
- Generation of text, images and videos on the Internet
- Recognition and processing of images
- Natural language recognition and translation
- Machine learning, pattern recognition
- Expert systems, question-answer systems, chatbots
- Chess computer programming
- Web search engines (e.g. Google, MS Bing)
- Assisted driving and navigation
- Artificial Neural Networks and Deep Learning
- Intelligent robots
- Education
- Knowledge representation
- Science and research (e.g., CERN)
- Space travel (e.g. Curiosity Mars Rover)
- Computer games (videogames)
- Computer Vision
- Future smartphones
Machine Learning (ML)
ML is a sub-area of artificial intelligence and refers to IT systems learning from data and patterns in order to improve themselves, instead of being explicitly programmed.
ML can be used in healthcare, Industry 4.0 and autonomous driving (self-driving vehicles).
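A minimal sketch of this idea: instead of hand-coding rules, a model is derived from example data. Here, a tiny 1-nearest-neighbour classifier with invented data points.

    def nearest_neighbour(train, point):
        # train: list of ((x, y), label); predict the label of the closest example
        def dist(a, b):
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        return min(train, key=lambda item: dist(item[0], point))[1]

    training_data = [((1.0, 1.2), "cat"), ((0.9, 1.0), "cat"),
                     ((3.0, 3.5), "dog"), ((3.2, 2.9), "dog")]

    print(nearest_neighbour(training_data, (1.1, 1.1)))   # 'cat'
    print(nearest_neighbour(training_data, (3.1, 3.0)))   # 'dog'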
Quantum computer
Quantum computers are based on a different principle than conventional computers, they use the laws of quantum physics.
To build a quantum computer, one first needs computing and memory units. These so-called qubits are the quantum mechanical counterpart to the bits of conventional computers. Bits can assume exactly one of two possible states, which in the binary system are either zero or one. The qubit, on the other hand, can be in an intermediate state of zero and one for a certain period of time, the so-called coherence time. This state is called superposition (a separate quantum effect, entanglement, links several qubits together). Through a measurement, the qubit then transitions to one of the two clearly defined states, so that one can store the measurement result in a "classical" bit. This loss of superposition is called decoherence.
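A minimal numpy sketch of a single qubit as described above: a state vector, a superposition created by a Hadamard gate, and the resulting measurement probabilities; the gate choice is just an illustrative example.

    import numpy as np

    zero = np.array([1.0, 0.0])                      # the |0> state (like a classical 0 bit)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

    psi = H @ zero                                   # equal superposition of |0> and |1>
    probs = np.abs(psi) ** 2                         # measurement probabilities
    print(probs)                                     # [0.5 0.5]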
The computing unit of a quantum computer is therefore the qubit. In 2021, 127 qubits were achieved, in 2022 even 433. However, it is not only the quantity of qubits that is decisive, but above all their quality, i.e. the entanglement of the qubits and the coherence time during which the quantum system remains stable enough to calculate - otherwise the information is lost in noise.
A quantum computer, unlike a classical digital computer, can solve multiple tasks in parallel (inherent parallel processing). This makes it better suited for solving certain classes of problems and can solve them much faster than a conventional supercomputer.
Nevertheless, quantum computers will not be able to displace digital computers, but will coexist with them.
The first smaller commercial quantum computers are already in use.
Quantum computers are being developed by IBM, Microsoft, Google, Nvidia, Amazon and D-Wave, as well as in China, but German companies are also involved in quantum computer development projects. One of the most powerful quantum computers in the world is located at the Jülich Research Center in Germany.
Examples of where quantum computers are used are optimization tasks, simulations, machine learning processes and quantum cryptography.
Bundling AI - Quantum Computing
There are efforts to connect AI with quantum computing. The already huge potential of these two cutting-edge IT technologies would be further enhanced and their possible uses expanded even further.
Some experts believe that the merging of quantum computing and AI could enable exponential advances. If AI algorithms could run on quantum computers, the ability to quickly analyze vast amounts of heterogeneous data would be a huge leap forward.
Quantum computing promises not only massive speed increases, but also entirely new areas of application. The synergy between quantum computing and AI has the potential to fundamentally change many industries. For example, quantum simulation could be ideal for climate models to predict weather events based on millions of variables.
In scientific research, quantum simulations could be used to develop molecular behavior models. This would allow researchers to test and develop prototypes more quickly. In such a context, AI and quantum computing will form a powerful duo to address complex challenges such as climate change and health problems.
But they could also be useful in supply chain management, financial management and optimization problems.
While the benefits are enticing, caution is warranted. The rapid development in both technology areas also raises questions about governance, standardization and ethics.
The technology is fascinating, but it is important to see it in the context of what cannot be done with classical computer science today: quantum computers will not be able to solve everything and will not completely replace classical computers.
You need hybrid architectures and eventually a quantum processing unit alongside the central processing unit and the graphics processing unit in computers.
The quantum revolution is just unfolding. It was a scientific dream just a few years ago, but today it is becoming more real and tangible. The market opportunities are enormous!
By 2030, we could see the beginning of an era where quantum computers combined with artificial intelligence are no longer just part of the scientific discussion, but are also tangible tools for industry and research, as well as replacing humans in many areas.
Leading organizations such as the European Quantum Industry Confederation (QuIC) are already working on cross-industry education and standards development to facilitate this transition.
The integration of AI with blockchain technology could be another breakthrough combination. The two could complement each other in a way that brings out the best of both worlds. For example, Aptos and Microsoft recently teamed up to create an AI bot based on ChatGPT. In addition, according to IBM, the symbiosis of AI and blockchain can verify the origin of data. Thus, trust in data integrity and AI should be improved. Thus, blockchain should provide AI with more actionable insights.
In the healthcare industry, AI and blockchain can enable improved patient care and privacy through secure, shared patient data. In the pharmaceutical industry, they improve transparency and success of clinical trials through smart data analytics and decentralized organization. In the financial sector and supply chain, they enable faster transactions and greater efficiency. They also offer new opportunities for transparency, security and sustainability at the same time.
The combination of quantum computing, AI and blockchain is more than just an interesting scientific idea, it could be the next great technological revolution. But while the benefits and potentials are enormous, challenges remain in terms of hardware, ethics, and accessibility.
Newer trends in software development
The Rust, Go, and Kotlin languages are evolving and becoming more widely used, the creation of mobile apps is playing an increasingly important role, as is software development in cloud environments.
Agile software development, software test automation, continuous integration, continuous delivery, DevOps and microservices are also being used increasingly.
What is an Easter Egg?
An easter egg is an undocumented feature hidden in the software that can usually be invoked by a special key combination or input.
Warren Robinett is said to have hidden the first Easter egg in a computer game in the adventure game "Adventure". It was programmed in 1978 for the Atari 2600 game console.
In addition to applications, operating systems, browsers and games, Easter eggs can now also be found on numerous websites and even in mobile apps.
Examples:
- Word 97: Pinball
- Excel 97: Flight simulator (controllable with the mouse)
To start the flight simulator, open a new workbook, press F5, enter "X97:L97" and confirm, then hold down the Ctrl and Shift keys together and click the Chart Wizard button.
- Excel 2000: Car Game
- Excel 2010: Monkey Island game
- Media player Winamp:
Llamas are hidden that nod their heads to the music being played. To see the animals, a tiny diamond has to be clicked after a certain key combination is pressed. The gem, in turn, is only visible at a special window size.
- Skype:
By typing "drunk" and then pressing from Enter, a tangled, drowned one face appears.
- Chrome web browser:
A little game in which falling letters must be clicked away within a given time (type "zerg rush" and then select the first search result "Play Zerg Rush by Google")
Appearance of Grogu (if you type "grogu")
Meteorite animation (if you type "meteorit")
Spirit level (if you type "wasserwaage")
Rotating the display (if you type "do a barrel roll")
Confetti animation (if you type "neujahr")
Computer games
In 2018, a third of the world's population regularly spent time playing video games, be it on a smartphone, game console or PC.
The triumph of video games began in the 1970s, but has its roots around 20 years earlier. The history of video games is closely linked to the development of computers. In the 1950s, huge, space-filling computers solved relatively simple arithmetic problems. Playing with these giants was out of the question. And yet inventors at US universities developed simple computer games.
Tennis for Two from 1958 is considered the first video game programmed purely for entertainment purposes. The game then fell into oblivion for 20 years, but today it is regarded as the first video game ever.
From 1985, the first commercial online game, Island of Kesmai, was offered within the Compuserve network. With the commercialization of the Internet from the early 1990s, online games increasingly reached private households.
In 1972 the Atari company was formed. Not only would it dominate the video game industry for the next decade, but it also developed the first globally successful game, Pong.
The golden age of arcades began with the game Space Invaders (1978). Prior to that, computing made major advances with the founding of Apple in 1976 and the development of microprocessors. And Atari managed another coup with the Atari 2600 home console: more than 30 million people bought the game console launched in 1977.
The most successful computer games according to sales figures
- Minecraft (2011, sold over 200 million copies)
- Grand Theft Auto V (2013, sold approx. 180 million copies)
- Tetris (2006, sold 100 million copies)
According to download numbers
- Subway Surfers (over a billion downloads on Android)
- Candy Crush Saga (over a billion downloads on Android)
- Hill Climb Racing (over a billion downloads on Android)
Game consoles
Game consoles are processor-controlled functional units whose hardware and software are optimized for computer games. They can be stand-alone units with or without a display. Game consoles without their own display can use a PC or television as a display.
The hardware of a game console - the graphics processor and all other components - supports rapidly changing graphics as well as 3D graphics. The computing power of the graphics processor is extremely high and optimized for computer games. In order to make computer games appear as realistic as possible, game consoles work with virtual reality and artificial intelligence.
The most well-known game consoles are the PlayStation 4, Xbox and Nintendo Switch.
Computer jokes
● Murphy's law applied to computer programming: "Any finished program that runs is obsolete."
● The shortest programmer joke: I'm almost done!
● Why are programmers such bad dancers? Because they always have problems with the (program) steps!
● The world of computers is pretty nutritious: chips, cookies and, of course, hard discs.
● What do you get when a spider runs across the monitor? A web page!
● What do you call a bee from the United States of America? USB!
● How much space did Brexit free up in the EU? 1 GB!
● Why do computers wear glasses? To improve their web vision.
● How does a computer get drunk? With screenshots!
● Why does the computer sneeze? It has a virus!
● Where do computers dance? In the Disc-O!
● How many programmers does it take to change a light bulb? None! It's a hardware problem!
● Give a man a program and you frustrate him for a day. Teach a man to program and you frustrate him for a lifetime!
● There are 10 types of people in the world. Some understand the binary system and others don't.
● How does a computer scientist undress his girlfriend? => getStringFromObject();
● Why do women like object-oriented programmers? Because they have class!
● IT Support: Please close all the windows. User: Even the one in the bathroom?
● A computer scientist pushes a baby carriage through the park. An elderly couple approaches: "Boy or girl?" Computer scientist: "Right!"
● What does a computer scientist vampire leave behind? A Mega-bite!
● Windows: Couldn't find your keyboard! Press F1 for help!
● Why do computer scientists confuse Halloween and Christmas? Because oct(31) == dec(25)
● Problems of a computer scientist falling asleep...
while (!asleep) { sleep++; }
● What is the difference between a computer scientist and a physicist? The physicist believes that a Kilobyte is 1000 Bytes. The computer scientist believes that a Kilometer is 1024 meters.
● What does a computer scientist say when he is born? => "Hello world!"
● Wikipedia: I know everything! Google: I have everything! Facebook: I know everyone! Internet: You're nothing without me! Electricity: Keep talking...!
● Programmer to Spaniard: "Do you speak a programming language?" Spaniard: "C".
● What is 3.1415926536 squared? Pi pi.