Xampp All About




XAMPP

From Wikipedia, the free encyclopedia

XAMPP

Developer(s): Apache Friends
Stable release: 1.7.4 / January 26, 2011
Operating system: Cross-platform (Linux, Windows, Solaris, Mac OS X)
Type: WAMP, MAMP, SAMP, LAMP
License: GPL
Website: www.apachefriends.org/en/xampp.html

XAMPP (/ˈzæmp/ or /ˈɛks.æmp/[1]) is a free and open source cross-platform web server solution stack package, consisting mainly of the Apache HTTP Server, MySQL database, and interpreters for scripts written in the PHP and Perl programming languages.


Etymology

XAMPP's name is an acronym for:

X (to be read as "cross", meaning cross-platform)

Apache HTTP Server

MySQL

PHP

Perl


The program is released under the terms of the GNU General Public License and acts as a free web server capable of serving dynamic pages. XAMPP is available for Microsoft Windows, Linux, Solaris, and Mac OS X, and is mainly used for web development projects. It is useful when creating dynamic web pages with server-side languages such as PHP, JSP, and servlets.

Requirements and features

XAMPP requires only one zip, tar or exe file to be downloaded and run, and little or no configuration of the various components that make up the web server is required. XAMPP is regularly updated to incorporate the latest releases of Apache/MySQL/PHP and Perl. It also comes with a number of other modules including OpenSSL and phpMyAdmin.

Installing XAMPP takes less time than installing each of its components separately. Self-contained, multiple instances of XAMPP can exist on a single computer, and any given instance can be copied from one computer to another.

It is offered in both a full, standard version and a smaller version.

Use

Officially, XAMPP's designers intended it for use only as a development tool, to allow website designers and programmers to test their work on their own computers without any access to the Internet. To make this as easy as possible, many important security features are disabled by default.[2] In practice, however, XAMPP is sometimes used to actually serve web pages on the World Wide Web. A special tool is provided to password-protect the most important parts of the package.

XAMPP also provides support for creating and manipulating databases in MySQL and SQLite among others.

Once XAMPP is installed, you can treat your localhost like a remote host by connecting with an FTP client. Using a program like FileZilla has many advantages when installing a content management system (CMS) like Joomla. You can also connect to localhost via FTP from your HTML editor.

The default FTP user is "newuser"; the default FTP password is "wampp".

The default MySQL user is "root" while there is no default MySQL password.
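As a sketch of these defaults in practice, the following Python snippet collects them and shows how one might log in to the local FTP server with the standard ftplib module. The host value and the live-connection part are assumptions about a default local install; only the credential table comes from the text above.

```python
# Default credentials for a fresh XAMPP install, as listed above.
DEFAULTS = {
    "ftp_user": "newuser",
    "ftp_password": "wampp",
    "mysql_user": "root",
    "mysql_password": "",   # no default MySQL password
}

def ftp_login(host="localhost"):
    """Connect to the local XAMPP FTP server with the default account.

    Assumes a running XAMPP FTP service on `host`; the hostname is an
    illustrative assumption, not something the article specifies.
    """
    from ftplib import FTP
    ftp = FTP(host)
    ftp.login(DEFAULTS["ftp_user"], DEFAULTS["ftp_password"])
    return ftp

if __name__ == "__main__":
    # Only works if an XAMPP FTP server is actually running locally.
    ftp = ftp_login()
    print(ftp.getwelcome())
    ftp.quit()
```

Changing these defaults is one of the first steps the password-protection tool mentioned above walks you through.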

Apache HTTP Server

From Wikipedia, the free encyclopedia



Apache HTTP Server

Original author(s): Robert McCool
Developer(s): Apache Software Foundation
Initial release: 1995[1]
Stable release: 2.2.19 / May 22, 2011
Preview release: 2.3.14-beta / August 9, 2011
Written in: C
Operating system: Cross-platform
Available in: English
Type: Web server
License: Apache License 2.0
Website: http://httpd.apache.org/

The Apache HTTP Server, commonly referred to as Apache (/əˈpætʃiː/), is web server software notable for playing a key role in the initial growth of the World Wide Web.[2] In 2009 it became the first web server software to surpass the 100 million website milestone.[3] Apache was the first viable alternative to the Netscape Communications Corporation web server (currently known as Oracle iPlanet Web Server), and has since evolved to rival other web servers in terms of functionality and performance. Typically Apache is run on a Unix-like operating system.[4]

Apache is developed and maintained by an open community of developers under the auspices of the Apache Software Foundation. The application is available for a wide variety of operating systems, including Unix, GNU, FreeBSD, Linux, Solaris, Novell NetWare, AmigaOS, Mac OS X, Microsoft Windows, OS/2, TPF, and eComStation. Released under the Apache License, Apache is open-source software.

Apache was originally based on NCSA HTTPd code. The NCSA code has since been removed from Apache, as part of a rewrite.

Since April 1996 Apache has been the most popular HTTP server software in use. As of May 2011 Apache was estimated to serve 63% of all websites and 66% of the million busiest.[5]


Features

Apache supports a variety of features, many implemented as compiled modules which extend the core functionality. These range from server-side programming language support to authentication schemes. Common language interfaces support Perl, Python, Tcl, and PHP. Popular authentication modules include mod_access, mod_auth, mod_digest, and mod_auth_digest, the successor to mod_digest. Other features include SSL and TLS support (mod_ssl), a proxy module (mod_proxy), a URL rewriter (also known as a rewrite engine, implemented in mod_rewrite), custom log files (mod_log_config), and filtering support (mod_include and mod_ext_filter).
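As an illustration of how such modules are wired in, here is a hedged httpd.conf sketch that loads mod_rewrite and declares one rewrite rule. The module path, directory, and URL names are illustrative assumptions; they vary by platform and installation.

```apache
# Load the rewrite engine module (path varies by platform/build):
LoadModule rewrite_module modules/mod_rewrite.so

<Directory "/var/www/html">
    RewriteEngine On
    # Send requests for an old page to its new location permanently
    RewriteRule ^old-page\.html$ /new-page.html [R=301,L]
</Directory>
```

The same LoadModule pattern applies to the other modules named above (mod_ssl, mod_proxy, and so on), each bringing its own directives.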

Popular compression methods on Apache include the external extension module, mod_gzip, implemented to help with reduction of the size (weight) of web pages served over HTTP. ModSecurity is an open source intrusion detection and prevention engine for web applications. Apache logs can be analyzed through a web browser using free scripts such as AWStats/W3Perl or Visitors.

Virtual hosting allows one Apache installation to serve many different actual websites. For example, one machine with one Apache installation could simultaneously serve www.example.com, www.test.com, test47.test-server.test.com, etc.
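A minimal sketch of what such name-based virtual hosting might look like in httpd.conf, with hypothetical hostnames and document roots:

```apache
# Two name-based virtual hosts served by one Apache installation.
# Hostnames and paths are illustrative only.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot "/var/www/example"
</VirtualHost>

<VirtualHost *:80>
    ServerName www.test.com
    DocumentRoot "/var/www/test"
</VirtualHost>
```

Apache picks the matching block by inspecting the Host header of each incoming request, so one IP address can serve many sites.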

Apache features configurable error messages, DBMS-based authentication databases, and content negotiation. It is also supported by several graphical user interfaces (GUIs).

It supports password authentication and digital certificate authentication. Apache has a built-in search engine and an HTML authoring tool, and supports FTP.

Performance


Although the main design goal of Apache is not to be the "fastest" web server, Apache does have performance comparable to other "high-performance" web servers. Instead of implementing a single architecture, Apache provides a variety of MultiProcessing Modules (MPMs) which allow Apache to run in a process-based, hybrid (process and thread), or event-hybrid mode, to better match the demands of each particular infrastructure. This means that the choice of the correct MPM and the correct configuration is important. Where compromises in performance need to be made, Apache is designed to reduce latency and increase throughput rather than simply handle more requests, thus ensuring consistent and reliable processing of requests within reasonable time frames.

The Apache version that the Apache Software Foundation considers to provide the highest performance is the multi-threaded version, which mixes the use of several processes and several threads per process.[6]

While this architecture works faster than the previous multi-process-based topology (because threads have lower overhead than processes), it does not match the performance of the event-based architectures provided by other servers, especially when they process events with several worker threads.

This difference can be easily explained by the overhead that one thread per connection brings (as opposed to a couple of worker threads per CPU, each processing many connection events). Each thread needs to maintain its own stack and environment, and switching from one thread to another is also an expensive task for the CPU.
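The one-thread-per-connection model described here can be sketched with Python's standard library: ThreadingTCPServer dedicates a full OS thread (with its own stack) to every accepted connection, much as a threaded MPM dedicates a worker to each request. This is a minimal illustrative sketch of the cost model, not Apache's actual implementation.

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    # One handler instance runs per connection; ThreadingTCPServer
    # spawns a dedicated OS thread for each one -- the per-connection
    # overhead (stack, context switches) discussed above.
    def handle(self):
        for line in self.rfile:
            self.wfile.write(line)  # echo each line back

def start_echo_server():
    """Start a thread-per-connection echo server on an ephemeral port."""
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def echo_once(port, message):
    """Connect once, send one line, and return the echoed reply."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message + b"\n")
        return sock.makefile("rb").readline().rstrip(b"\n")
```

An event-driven server would instead multiplex many such connections over a few worker threads, avoiding one stack and one context switch per client.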

Database

From Wikipedia, the free encyclopedia

A database is an organized collection of data for one or more purposes, usually in digital form. The data are typically organized to model relevant aspects of reality (for example, the availability of rooms in hotels), in a way that supports processes requiring this information (for example, finding a hotel with vacancies). The term "database" refers both to the way its users view it and to the logical and physical materialization of its data and content in files, computer memory, and computer data storage. This definition is very general, and is independent of the technology used. However, not every collection of data is a database; the term implies that the data is managed to some level of quality (measured in terms of accuracy, availability, usability, and resilience), and this in turn often implies the use of a general-purpose database management system (DBMS). A general-purpose DBMS is typically a complex software system that meets many usage requirements, and the databases it maintains are often large and complex.

The term database is correctly applied to the data and data structures, and not to the DBMS, which is the software system used to manage the data. The structure of a database is generally too complex to be handled without its DBMS, and any attempt to do otherwise is very likely to result in database corruption. DBMSs are packaged as computer software products: well-known and highly utilized products include the Oracle DBMS, Access and SQL Server from Microsoft, DB2 from IBM, and the open source DBMS MySQL. Each such DBMS product currently supports many thousands of databases all over the world. The stored data in a database is not generally portable across different DBMSs, but databases can interoperate to some degree (while each DBMS type controls a database of its own type) using standards like SQL and ODBC. A successful general-purpose DBMS is designed so that it can satisfy as many different applications and application designers as possible. A DBMS also needs to provide effective run-time execution to properly support (e.g., in terms of performance, availability, and security) as many end-users (the database's application users) as needed. Sometimes the combination of a database and its respective DBMS is referred to as a database system (DBS).

A database is typically organized according to general Data models that have evolved since the late 1960s. Notable are the Relational model (all the DBMS types listed above support databases based on this model), the Entity-relationship model (ERM; primarily utilized to design databases), and the Object model (which has more expressive power than the relational, but is more complicated and less commonly used). Some recent database products use XML as their data model. A single database may be viewed for convenience within different data models that are mapped between each other (e.g., mapping between ERM and RM is very common in the database design process, and supported by many database design tools, often within the DBMS itself). Many DBMSs support one data model only, externalized to database developers, but some allow different data models to be used and combined.

The design and maintenance of a complex database requires specialist skills: the staff performing this function are referred to as database application programmers (distinct from the DBMS developers/programmers) and database administrators, and their work is supported by tools provided either as part of the DBMS or as free-standing (stand-alone) software products. These tools include specialized database languages, including data description languages, data manipulation languages, and query languages. These can be seen as special-purpose programming languages, tailored specifically to manipulate databases; sometimes they are provided as extensions of existing programming languages, with added special database commands. Database languages are generally specific to one data model, and in many cases to one DBMS type. The most widely supported standard database language is SQL, which was developed for the relational model and combines the roles of data description language, data manipulation language, and query language.
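As a small illustration of SQL playing all three of these roles in one session, the following uses Python's built-in SQLite driver (an embedded relational DBMS). The table and column names are made up for the example.

```python
import sqlite3

# An in-memory database keeps the sketch self-contained.
con = sqlite3.connect(":memory:")

# Data description: define the structure of the data.
con.execute("CREATE TABLE hotel (name TEXT, vacancies INTEGER)")

# Data manipulation: change the content.
con.executemany("INSERT INTO hotel VALUES (?, ?)",
                [("Grand", 3), ("Plaza", 0)])

# Query: retrieve data by content, not by location.
rows = con.execute(
    "SELECT name FROM hotel WHERE vacancies > 0").fetchall()
```

The same SELECT/INSERT/CREATE vocabulary carries over to the larger DBMSs named above, which is precisely what makes SQL the standard database language.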

A way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, multimedia objects, etc. Another way is by their application area, for example: Accounting, Music compositions, Banking, Manufacturing, Insurance, etc.


Overview

The following brief sections explain what a database is. The explanation proceeds by giving examples of various database types, describing the motivation for developing the database concept since the 1960s, outlining the major requirements that databases typically need to meet, and then covering the major functional topics of databases. Finally, the section on database management systems (DBMSs) briefly describes how these requirements are met by contemporary technology.

Most of these brief sections are backed by Wikipedia main articles linked within them, which provide more thorough descriptions of the respective subjects. Those articles may point to further articles for deeper coverage of the database area.

History

The database concept

The database concept has evolved since the 1960s to ease the increasing difficulties of designing, building, and maintaining complex information systems (typically with many concurrent end-users and a large, diverse amount of data). It has evolved together with database management systems (DBMSs), which enable the effective handling of databases. Though the terms database and DBMS define different entities, they are inseparable: a database's properties are determined by its supporting DBMS and vice versa. The Oxford English Dictionary cites a 1962 technical report as the first to use the term "database". With progress in processors, computer memory, computer storage, and computer networks, the sizes, capabilities, and performance of databases and their respective DBMSs have grown by orders of magnitude. For decades it has been unlikely that a complex information system could be built effectively without a proper database supported by a DBMS.

No widely accepted exact definition exists for DBMS. However, a system needs to provide considerable functionality to qualify as a DBMS. Accordingly, the data collection it supports needs to meet the respective usability requirements (broadly defined by the requirements below) to qualify as a database. Thus, a database and its supporting DBMS are defined here by a set of general requirements listed below. Virtually all existing mature DBMS products meet these requirements to a great extent, while less mature ones either meet them or are converging to do so.

Evolution of database and DBMS technology

See also: History in Database management system

The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing.

In the earliest database systems, efficiency was perhaps the primary concern, but it was already recognized that there were other important objectives. One of the key aims was to make the data independent of the logic of application programs, so that the same data could be made available to different applications.

The first generation of database systems were navigational:[1] applications typically accessed data by following pointers from one record to another. The two main data models at this time were the hierarchical model, epitomized by IBM's IMS system, and the Codasyl model (network model), implemented in a number of products such as IDMS.

The Relational model, first proposed in 1970, departed from this tradition by insisting that applications should search for data by content, rather than by following links. This was considered necessary to allow the content of the database to evolve without constant rewriting of applications. Relational systems placed heavy demands on processing resources, and it was not until the mid-1980s that computing hardware became powerful enough to allow them to be widely deployed. By the early 1990s, however, relational systems were dominant for all large-scale data processing applications, and they remain dominant today (2011) except in niche areas. The dominant database language is standard SQL for the Relational model, which has also influenced database languages for other data models.

Because the relational model emphasizes search rather than navigation, it does not make relationships between different entities explicit in the form of pointers, but represents them instead using primary keys and foreign keys. While this is a good basis for a query language, it is less well suited as a modeling language. For this reason a different model, the Entity-relationship model, which emerged shortly afterwards (1976), gained popularity for database design.
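The point about keys rather than pointers can be sketched in SQL, here via Python's built-in SQLite driver. The schema is illustrative: the relationship between employees and departments exists only as matching key values, and a join recovers it at query time.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# SQLite enforces foreign keys only when this pragma is enabled.
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY,
        name TEXT,
        dept_id INTEGER REFERENCES department(id)
    );
    INSERT INTO department VALUES (1, 'Sales');
    INSERT INTO employee VALUES (10, 'Ada', 1);
""")

# Search by content: join through the keys, no pointer chasing.
who = con.execute("""
    SELECT e.name FROM employee e
    JOIN department d ON e.dept_id = d.id
    WHERE d.name = 'Sales'
""").fetchall()

# The DBMS guards the relationship: a dangling key is rejected.
try:
    con.execute("INSERT INTO employee VALUES (11, 'Bob', 99)")
    fk_enforced = False
except sqlite3.IntegrityError:  # 99 is not a department id
    fk_enforced = True
```

Because the link lives in data rather than in pointers, the schema can evolve (new indexes, new tables) without rewriting applications that query it.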

Page 12: Xampp All About

In the period since the 1970s database technology has kept pace with the increasing resources becoming available from the computing platform: notably the rapid increase in the capacity and speed (and reduction in price) of disk storage, and the increasing capacity of main memory. This has enabled ever larger databases and higher throughputs to be achieved.

The rigidity of the relational model, in which all data is held in tables with a fixed structure of rows and columns, has increasingly been seen as a limitation when handling information that is richer or more varied in structure than the traditional 'ledger-book' data of corporate information systems: for example, document databases, engineering databases, multimedia databases, or databases used in the molecular sciences. Various attempts have been made to address this problem, many of them gathering under banners such as post-relational or NoSQL. Two developments of note are the Object database and the XML database. The vendors of relational databases have fought off competition from these newer models by extending the capabilities of their own products to support a wider variety of data types.

General-purpose DBMS

A DBMS has evolved into a complex software system, and its development typically requires thousands of person-years of effort. Some general-purpose DBMSs, like Oracle, Microsoft SQL Server, and IBM DB2, have been in ongoing development and enhancement for thirty years or more. General-purpose DBMSs aim to satisfy as many applications as possible, which typically makes them even more complex than special-purpose databases. However, the fact that they can be used "off the shelf", as well as their cost amortized over many applications and instances, makes them an attractive alternative (versus one-time development) whenever they meet an application's requirements.

Though attractive in many cases, a general-purpose DBMS is not always the optimal solution: when certain applications are pervasive, with many operating instances each serving many users, a general-purpose DBMS may introduce unnecessary overhead and too large a "footprint" (a large amount of unneeded, unused software code). Such applications usually justify dedicated development. A typical example is email systems: though they need to possess certain DBMS properties, they are built in a way that optimizes email message handling and management, and they do not need significant portions of general-purpose DBMS functionality.

Types of people involved

Three types of people are involved with a general-purpose DBMS:

1. DBMS developers - These are the people who design and build the DBMS product, and the only ones who touch its code. They are typically the employees of a DBMS vendor (e.g., Oracle, IBM, Microsoft), or, in the case of open source DBMSs (e.g., MySQL), volunteers or people supported by interested companies and organizations. They are typically skilled systems programmers. DBMS development is a complicated task, and some of the popular DBMSs have been under development and enhancement (also to keep up with progress in technology) for decades.

2. Application developers and database administrators - These are the people who design and build an application that uses the DBMS. The latter design the needed database and maintain it; the former write the application programs that make up the application. Both are well familiar with the DBMS product and use its user interfaces (as well as, usually, other tools) for their work. Sometimes the application itself is packaged and sold as a separate product, which may include the DBMS inside (see Embedded database; subject to proper DBMS licensing), or be sold separately as an add-on to the DBMS.

3. Application end-users (e.g., accountants, insurance agents, medical doctors) - These people know the application and its end-user interfaces, but need neither know nor understand the underlying DBMS. Thus, though they are the intended and main beneficiaries of a DBMS, they are only indirectly involved with it.

Database machines and appliances

Main article: Database machine

In the 1970s and 1980s attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine. Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).

Database research

Database research has been an active and diverse area, with many specializations, carried out since the early days of dealing with the database concept in the 1960s. It has strong ties with database technology and DBMS products. Database research has taken place at the research and development groups of companies (notably at IBM Research, which contributed technologies and ideas to virtually every DBMS existing today), at research institutes, and in academia. Research has been done through both theory and prototypes. The interaction between research and database-related product development has been very productive for the database area, and many related key concepts and technologies emerged from it. Notable are the Relational and Entity-relationship models, the atomic transaction concept and related concurrency control techniques, query languages and query optimization methods, RAID, and more. Research has provided deep insight into virtually all aspects of databases. Throughout their history, DBMSs and their databases have to a great extent been the outcome of such research, while real product requirements and challenges have triggered database research directions and sub-areas.

The database research area has several notable dedicated academic journals (e.g., ACM Transactions on Database Systems-TODS, Data and Knowledge Engineering-DKE, and more) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE, and more), as well as an active and quite heterogeneous (subject-wise) research community all over the world.

Page 14: Xampp All About

Database type examples


The following are examples of various database types. Some of them are not mainstream types, but most have received special attention (e.g., in research) due to end-user requirements. Some exist as specialized DBMS products, and some of the functionality types they provide have been incorporated into existing general-purpose DBMSs.

Active database

Main article: Active database

An active database is a database that includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization.

Most modern relational databases include active database features in the form of database triggers.
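A minimal sketch of a trigger acting as a simple active-database rule, using Python's built-in SQLite driver. The tables and the auditing rule are illustrative: the trigger reacts to an event (an insert) without any application code asking it to.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE audit (inserts INTEGER);
    INSERT INTO audit VALUES (0);

    -- Event-driven rule: fires automatically on every insert.
    CREATE TRIGGER count_inserts AFTER INSERT ON account
    BEGIN
        UPDATE audit SET inserts = inserts + 1;
    END;
""")

con.execute("INSERT INTO account VALUES (1, 100)")
con.execute("INSERT INTO account VALUES (2, 50)")
(insert_count,) = con.execute("SELECT inserts FROM audit").fetchone()
```

The same mechanism scales up to the uses named above: security monitoring, alerting, and statistics gathering are all triggers (or trigger-like rules) reacting to database events.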

Analytical database

Analysts may do their work directly against a data warehouse or create a separate analytic database for Online Analytical Processing (OLAP). For example, a company might extract sales records for analyzing the effectiveness of advertising and other sales promotions at an aggregate level.

Cloud database

A cloud database is a database that relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are developed, maintained, and used by end-users through a web browser and open APIs. More and more such database products are emerging, both from new vendors and from virtually all established database vendors.

Data warehouse

Main article: Data warehouse

Data warehouses archive data from operational databases and often from external sources such as market research firms. Often operational data undergoes transformation on its way into the warehouse, getting summarized, anonymized, reclassified, etc. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to UPCs so that it can be compared with ACNielsen data. Some basic and essential components of data warehousing include retrieving, analyzing, and mining data, and transforming, loading, and managing data so as to make it available for further use.

Operations in a data warehouse are typically concerned with bulk data manipulation, and as such, it is unusual and inefficient to target individual rows for update, insert or delete. Bulk native loaders for input data and bulk SQL passes for aggregation are the norm.
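A tiny illustration of such a bulk aggregation pass, using Python's built-in SQLite driver. The schema and figures are made up: one GROUP BY statement summarizes all detail rows at once, rather than touching them one at a time.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sale (week INTEGER, product TEXT, amount INTEGER)")

# Bulk load of detail rows (a stand-in for a native bulk loader).
con.executemany("INSERT INTO sale VALUES (?, ?, ?)", [
    (1, 'A', 10), (1, 'B', 5), (2, 'A', 7), (2, 'A', 3),
])

# One aggregation pass produces the weekly totals.
weekly = con.execute(
    "SELECT week, SUM(amount) FROM sale GROUP BY week ORDER BY week"
).fetchall()
```

Real warehouses run the same pattern over millions of rows, which is why bulk loaders and set-at-a-time SQL dominate over row-at-a-time updates.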

Distributed database

Main article: Distributed database

The definition of a distributed database is broad, and the term is used with different meanings. Usually it refers to the spatial distribution of a database, and possibly of the DBMS, over multiple computers and sometimes over different sites.

Examples are databases of local work-groups and departments at regional offices, branch offices, manufacturing plants and other work sites. These databases can include segments of both common operational and common user databases, as well as data generated and used only at a user’s own site.

Document-oriented database

Main article: Document-oriented database


A document-oriented database is used to conveniently store, manage, edit, and retrieve documents.

Embedded database

Main article: Embedded database

An embedded database system is a DBMS that is tightly integrated with application software requiring access to stored data, in such a way that the DBMS is "hidden" from the application's end-user and requires little or no ongoing maintenance. It is actually a broad technology category that includes DBMSs with differing properties and target markets. The term "embedded database" can be confusing because only a small subset of embedded database products is used in real-time embedded systems such as telecommunications switches and consumer electronics devices.[2]

[edit] End-user database

These databases consist of data developed by individual end-users. Examples are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases. Some of them are much simpler than full-fledged DBMSs, with more elementary DBMS functionality (e.g., not supporting multiple concurrent end-users on the same database), basic programming interfaces, and a relatively small "footprint" (not much code to run, compared with "regular" general-purpose databases). However, general-purpose DBMSs can often also be used for this purpose, if they provide basic user interfaces for straightforward database applications (limited query and data display; no real computer programming needed), while still offering the database qualities and protections that these DBMSs can provide.

[edit] External database

These databases contain data collected for use across multiple organizations, either freely or via subscription. The Internet Movie Database is one example.

[edit] Graph database

Main article: Graph database

This section requires expansion.

A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
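The node/edge/property structure a graph database manages can be sketched in a few lines. This is a minimal illustration of the data model, not the API of any particular graph database product; all names are invented.

```python
# Minimal property-graph sketch: nodes and edges each carry a
# dictionary of properties, as in the graph data model.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # node id -> property dict
        self.edges = []   # (source, label, destination, property dict)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst, **props):
        self.edges.append((src, label, dst, props))

    def neighbours(self, node_id, label=None):
        # Follow outgoing edges, optionally filtered by edge label.
        return [dst for src, lbl, dst, _ in self.edges
                if src == node_id and (label is None or lbl == label)]

g = PropertyGraph()
g.add_node("alice", kind="person")
g.add_node("acme", kind="company")
g.add_edge("alice", "works_at", "acme", since=2010)
print(g.neighbours("alice", "works_at"))  # -> ['acme']
```

A real graph database adds persistence, indexing, and a traversal query language on top of structures like these.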

[edit] Hypermedia databases

The World Wide Web can be thought of as a database, albeit one spread across millions of independent computing systems. Web browsers "process" this data one page at a time, while Web crawlers and other software provide the equivalent of database indexes to support search and other activities.

[edit] In-memory database

Main article: In-memory database

An in-memory database (IMDB; also main memory database or MMDB) is a database that primarily resides in main memory, but is typically backed up by non-volatile computer data storage. Main memory databases are faster than disk-based databases. Accessing data in memory reduces I/O activity when, for example, querying the data. In applications where response time is critical, such as telecommunications network equipment, main memory databases are often used.[3]
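The idea can be tried out directly: SQLite (bundled with Python) can run a database entirely in main memory, which serves here as a convenient stand-in for the in-memory-database concept; dedicated IMDB products add durability and replication layers on top.

```python
import sqlite3

# ":memory:" keeps the whole database in RAM -- no file I/O at all.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (subscriber TEXT, seconds INTEGER)")
con.execute("INSERT INTO calls VALUES ('555-0100', 42)")  # invented data
total, = con.execute("SELECT SUM(seconds) FROM calls").fetchone()
print(total)  # -> 42
```

When the connection closes, the data is gone, which is exactly why real in-memory databases pair main-memory storage with non-volatile backups.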

[edit] Knowledge base

Main article: Knowledge base

A knowledge base (abbreviated KB, kb or Δ[4][5]) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. It may also be a collection of data representing problems together with their solutions and related experiences.


[edit] Operational database

These databases store detailed data about the operations of an organization. They are typically organized by subject matter and process relatively high volumes of updates using transactions. Essentially every major organization on earth uses such databases. Examples include customer databases that record contact, credit, and demographic information about a business's customers; personnel databases that hold information such as salary, benefits, and skills data about employees; enterprise resource planning systems that record details about product components and parts inventory; and financial databases that keep track of the organization's money, accounting and financial dealings.

[edit] Parallel database

Main article: Parallel database

This section requires expansion.

[edit] Real-time database

Main article: Real time database

This section requires expansion.

[edit] Spatial database

Main article: Spatial database

This section requires expansion.

[edit] Temporal database

Main article: Temporal database

This section requires expansion.

[edit] Major database usage requirements

This section requires expansion.

The major purpose of a database is to provide the information system (in its broadest sense) that uses it with the information the system needs, according to its own requirements. A certain broad set of requirements refines this general goal. These database requirements translate into requirements for the respective DBMS, to allow a proper database to be built conveniently for the given application. If this goal is met by a DBMS, then the designers and builders of the specific database can concentrate on the application's aspects rather than on building and maintaining the underlying DBMS. Also, since a DBMS is complex and expensive to build and maintain, it is not economical to build such a new tool (DBMS) for every application. Rather, it is desirable to provide a flexible tool for handling databases for as many applications as possible, i.e., a general-purpose DBMS.

[edit] Functional requirements

Certain general functional requirements need to be met in conjunction with a database. They describe what needs to be defined in a database for any specific application.

[edit] Defining the structure of data: Data modeling and Data definition languages

The database needs to be based on a data model that is sufficiently rich to describe all the needed aspects of the respective application. A data definition language exists to describe databases within the data model. Such a language is typically data-model specific.
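SQL's DDL subset is the best-known example of such a language. The sketch below, with an invented two-table schema, shows structure being defined before any data exists.

```python
import sqlite3

# A data definition language in action: CREATE TABLE statements describe
# types, keys, and relationships; no data is stored yet.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL
    );
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        dept_id INTEGER REFERENCES department(dept_id)
    );
""")
# The catalog now records the defined structures.
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # -> ['department', 'employee']
```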

[edit] Manipulating the data: Data manipulation languages and Query languages

A database's data model needs the support of a sufficiently rich data manipulation language to allow all the database manipulations and information generation (from the data) needed by the respective application. Such a language is typically data-model specific.
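In SQL, the manipulation side is INSERT, UPDATE and DELETE, and the query side is SELECT. A toy example (invented schema and values):

```python
import sqlite3

# SQL's data manipulation statements change the data;
# SELECT derives information from what remains.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, salary INTEGER)")
con.execute("INSERT INTO employee VALUES ('Ada', 5000), ('Ben', 3000)")
con.execute("UPDATE employee SET salary = salary + 500 WHERE name = 'Ben'")
con.execute("DELETE FROM employee WHERE salary < 4000")   # removes Ben (3500)
rows = con.execute("SELECT name, salary FROM employee").fetchall()
print(rows)  # -> [('Ada', 5000)]
```

Each one-line statement above would take a loop or search routine in a general-purpose language, which is the simplification the surrounding text describes.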

[edit] Protecting the data: Database security

The database needs built-in security measures to protect its content (and users) from unauthorized users (either humans or programs). Protection is also provided against unintentional breaches.

[edit] Describing processes that use the data: Workflow and Business process modeling

Main articles: Workflow and Business process modeling

Manipulating database data often involves processes of several interdependent steps, at different times (e.g., when different people's interactions are involved, as when generating an insurance policy). Data manipulation languages are typically intended to describe what is needed in a single such step; dealing with multiple steps typically requires writing quite complex programs. Most applications are programmed using common programming languages and software development tools. However, the area of process description has evolved within the frameworks of workflow and business process modeling, with supporting languages and software packages that considerably simplify the task. Traditionally these frameworks have been outside the scope of common DBMSs, but their use has become commonplace, and they are often provided as add-ons to DBMSs.

[edit] Operational requirements

Operational requirements need to be met by a database in order to effectively support an application when it is operational. Though it might be expected that operational requirements are automatically met by a DBMS, in most cases this is not so: substantial design and tuning work by database administrators is typically needed to meet them. This is typically done through specific instructions/operations issued via special database user interfaces and tools, and thus may be viewed as a set of secondary functional requirements (which are no less important than the primary ones).

[edit] Availability

A database should maintain the needed levels of availability, i.e., it needs to be available in such a way that a user's action does not have to wait beyond a certain time range before it starts executing. Availability also relates to failure and recovery from it (see Recovery from failure and disaster below): upon failure and during recovery, normal availability changes, and special measures are needed to satisfy availability requirements.

[edit] Performance

Users' actions upon the database should execute within the needed time ranges.

[edit] Isolation between users

When multiple users access the database concurrently the actions of a user should be uninterrupted and unaffected by actions of other users. These concurrent actions should maintain the DB's consistency (i.e., keep the DB from corruption).

[edit] Recovery from failure and disaster

Main articles: Data recovery and Disaster recovery

All software systems, including DBMSs, are prone to failures for many reasons (both software- and hardware-related). Failures typically corrupt the database, often to the extent that it is impossible to repair without special measures. The DBMS should provide automatic recovery-from-failure procedures that repair the database and return it to a well-defined state.

A different type of failure is due to disaster, either natural (e.g., earthquake, flood, tornado) or man-made (e.g., intentional physical sabotage of systems, destructive acts of war). Recovery from disasters (disaster recovery), which typically incapacitate whole computer systems beyond repair (unlike a software failure or hardware component failure), requires special protective measures.

[edit] Backup and restore

Main article: Backup

Sometimes it is desirable to bring a database back to a previous state (for many reasons, e.g., when the database is found to be corrupted due to a software error, or when it has been updated with erroneous data). To achieve this, a backup operation is performed occasionally or continuously, whereby each desired database state (i.e., the values of its data and their embedding in the database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When this state is needed, i.e., when a database administrator decides to bring the database back to it (e.g., by specifying a desired point in time when the database was in that state), these files are used to restore it.
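The backup-then-restore cycle can be demonstrated with SQLite's online-backup API (exposed in Python as `Connection.backup`); this is one concrete mechanism among the many techniques mentioned above, with an invented table for illustration.

```python
import sqlite3

# Snapshot a database's state, damage the original, then restore it.
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE t (x INTEGER)")
live.execute("INSERT INTO t VALUES (1), (2)")

backup = sqlite3.connect(":memory:")
live.backup(backup)              # copy the current state into the backup

live.execute("DELETE FROM t")    # the "erroneous" change to undo

live.close()
live = sqlite3.connect(":memory:")   # fresh, empty database
backup.backup(live)                  # restore the saved state into it
count, = live.execute("SELECT COUNT(*) FROM t").fetchone()
print(count)  # -> 2
```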

[edit] Data independence

Main article: Data independence

Data independence pertains to a database's life cycle (see Database building, maintaining, and tuning below). It strongly impacts the convenience and cost of maintaining an application and its database, and has been the major motivation for the emergence and success of the relational model, as well as for the convergence to a common database architecture (see below). In general, the term "data independence" means that changes in the database's structure do not require changes in its application's computer programs, and that changes in the database at a certain architectural level (see below) do not affect the levels above it. Data independence is attainable to a great extent in contemporary DBMSs, but it is of course not total for all database structural changes.

[edit] Major database functional areas

The functional areas are domains and subjects that have evolved in order to provide proper answers and solutions to the functional requirements above.

[edit] Data models

Main article: Database model

A data model is an abstract structure that provides the means to effectively describe specific data structures needed to model an application. As such, a data model needs sufficient expressive power to capture the needed aspects of applications. These applications are often typical of commercial companies and other organizations (like manufacturing, human resources, stock, banking, etc.). For effective utilization and handling, it is desirable that a data model be relatively simple and intuitive. This may conflict with the high expressive power needed to deal with certain complex applications. Thus any popular general-purpose data model usually strikes a balance between being intuitive and relatively simple, and being detailed enough to provide high expressive power. The application's semantics are usually not explicitly expressed in the model, but rather are implicit (detailed by documentation external to the model) and hinted at by the names of data item types (e.g., "part-number") and their connections (as expressed by the generic data structure types provided by each specific model).

[edit] Early data models

These models were popular in the 1960s and 1970s, but nowadays can be found primarily in old legacy systems. They are characterized primarily by being navigational, with strong connections between their logical and physical representations, and by deficiencies in data independence.


[edit] Hierarchical model

Main article: Hierarchical database model

In the Hierarchical model different record types (representing real-world entities) are embedded in a predefined hierarchical (tree-like) structure. This hierarchy is used as the physical order of records in storage. Record access is done by navigating through the data structure using pointers combined with sequential accessing.

This model has been supported primarily by IBM's IMS DBMS, one of the earliest DBMSs. Various limitations of the model were compensated for in later IMS versions by additional logical hierarchies imposed on the base physical hierarchy.

[edit] Network model

Main article: Network model (database)

In this model a hierarchical relationship between two record types (representing real-world entities) is established via the set construct. A set consists of circular linked lists where one record type, the set owner or parent, appears once in each circle, and a second record type, the subordinate or child, may appear multiple times in each circle. In this way a hierarchy may be established between any two record types, e.g., type A is the owner of B. At the same time another set may be defined where B is the owner of A. Thus all the sets comprise a general directed graph (ownership defines a direction), or network construct. Access to records is either sequential (usually in each record type) or by navigation in the circular linked lists.
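The circular linked lists of the set construct can be sketched in a few lines. This is a toy rendering of the idea only, not CODASYL syntax, and the record contents are invented.

```python
# A toy CODASYL-style "set": an owner record plus a circular chain
# through its member (child) records.
class Record:
    def __init__(self, data):
        self.data = data
        self.next = self        # circular link; alone, a record points to itself

def insert_member(owner, member):
    member.next = owner.next    # splice the member into the owner's circle
    owner.next = member

def members(owner):
    out, node = [], owner.next
    while node is not owner:    # walk the circle until back at the owner
        out.append(node.data)
        node = node.next
    return out

dept = Record("Sales")                  # the set owner (parent) record
insert_member(dept, Record("Carol"))
insert_member(dept, Record("Bob"))
insert_member(dept, Record("Alice"))
print(members(dept))  # -> ['Alice', 'Bob', 'Carol']
```

Navigation here is literal pointer-chasing, which is exactly the "navigational" character attributed to these early models.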

This model is more general and powerful than the hierarchical one, and was the most popular model before being replaced by the relational model. It was standardized by CODASYL. Popular DBMS products that used it were Cincom Systems' Total and Cullinet's IDMS.

[edit] Inverted file model

Main article: Inverted index

An inverted file, or inverted index, of a first file, by a field in that file (the inversion field), is a second file in which this field is the key. A record in the second file includes a key and pointers to records in the first file where the inversion field has the value of the key. This is also the logical structure of contemporary database indexes. The related inverted-file data model uses inverted files of primary database files to directly and efficiently access needed records in those files.
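The structure just described can be built in miniature: a "first file" of records, and a second structure keyed by the inversion field that points back at matching record positions. The records are invented for illustration.

```python
# An inverted index in miniature: map each value of the inversion
# field ("city") to the positions of the records that hold it.
records = [
    {"id": 1, "city": "Berlin"},
    {"id": 2, "city": "Paris"},
    {"id": 3, "city": "Berlin"},
]

inverted = {}
for pos, rec in enumerate(records):
    inverted.setdefault(rec["city"], []).append(pos)  # value -> positions

# Direct access to every record with city == 'Berlin', without a scan:
berlin = [records[p]["id"] for p in inverted["Berlin"]]
print(berlin)  # -> [1, 3]
```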

Notable for using this data model is the ADABAS DBMS of Software AG, introduced in 1970. ADABAS gained a considerable customer base and is still supported today. In the 1980s it adopted the relational model and SQL in addition to its original tools and languages.

[edit] Relational model


Main article: Relational model

This section requires expansion.

[edit] Entity-relationship model

Main article: Entity-relationship model

This section requires expansion.

[edit] Object model

Main article: Object database

This section requires expansion.

In recent years, the object-oriented paradigm has been applied in areas such as engineering and spatial databases, telecommunications and in various scientific domains. The conglomeration of object oriented programming and database technology led to this new kind of database. These databases attempt to bring the database world and the application-programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database (for example as rows in tables) and its representation in the application program (typically as objects). At the same time, object databases attempt to introduce key ideas of object programming, such as encapsulation and polymorphism, into the world of databases.

A variety of ways have been tried[by whom?] for storing objects in a database. Some products have approached the problem from the application-programming side, by making the objects manipulated by the program persistent. This typically also requires the addition of some kind of query language, since conventional programming languages do not provide language-level functionality for finding objects based on their information content. Others[which?] have attacked the problem from the database end, by defining an object-oriented data model for the database and defining a database programming language that allows full programming capabilities as well as traditional query facilities.

[edit] Object relational model

Main article: Object-relational database

This section requires expansion.

[edit] XML as a database data model


Main articles: XML database and XML

This section requires expansion.

[edit] Other database models

This section requires expansion.

Products offering a more general data model than the relational model are sometimes classified as post-relational.[6] Alternate terms include "hybrid database", "Object-enhanced RDBMS" and others. The data model in such products incorporates relations but is not constrained by E.F. Codd's Information Principle, which requires that

all information in the database must be cast explicitly in terms of values in relations and in no other way[7]

Some of these extensions to the relational model integrate concepts from technologies that pre-date the relational model. For example, they allow representation of a directed graph with trees on the nodes. The German company sones implements this concept in its GraphDB.

Some post-relational products extend relational systems with non-relational features. Others arrived in much the same place by adding relational features to pre-relational systems. Paradoxically, this allows products that are historically pre-relational, such as PICK and MUMPS, to make a plausible claim to be post-relational.

[edit] Database languages

Main articles: Data definition language, Data manipulation language, and Query language

Database languages are dedicated programming languages, tailored and used to:

define a database (i.e., its specific data types and the relationships among them),

manipulate its content (e.g., insert new data occurrences, and update or delete existing ones), and

query it (i.e., compute and retrieve any information based on its data).

Database languages are data-model specific. They typically have commands to instruct execution of the desired operations in the database. Each such command is equivalent to a complex expression (program) in a regular programming language, and thus programming in dedicated (database) languages simplifies the task of handling databases considerably. An expression in a database language is automatically transformed (by a compiler or interpreter, as with regular programming languages) into a proper computer program that runs while accessing the database and provides the needed results. The following are notable examples:

[edit] SQL for the Relational model


Main article: SQL

This section requires expansion.

A major relational-model language, supported by all relational DBMSs, and a standard.

[edit] OQL for the Object model

Main article: OQL

This section requires expansion.

An Object model language standard (by the Object Data Management Group) that has influenced the design of some of the newer query languages like JDOQL and EJB QL, though they cannot be considered as different flavors of OQL.

[edit] XQuery for the XML model

Main articles: XQuery and XML

This section requires expansion.

XQuery is an XML-based database language (also named XQL).

[edit] Database architecture

See also: Data independence

Database architecture (to be distinguished from DBMS architecture; see below) may be viewed, to some extent, as an extension of data modeling. It is used to conveniently answer the requirements of different end-users of the same database, as well as for other benefits. For example, a company's financial department needs the payment details of all employees as part of the company's expenses, but not the many other details about employees that interest the human resources department. Thus different departments need different views of the company's database; each view includes the employees' payments, possibly at a different level of detail (and presented in different visual forms). To meet such requirements effectively, database architecture consists of three levels: external, conceptual and internal. Clearly separating the three levels was a major feature of the relational database model implementations that dominate 21st century databases.[8]

The external level defines how each end-user type understands the organization of its respective relevant data in the database, i.e., the different needed end-user views. A single database can have any number of views at the external level.

The conceptual level unifies the various external views into a coherent, global view.[8] It provides the common denominator of all the external views. It comprises all the generic data needed by end-users, i.e., all the data from which any view may be derived/computed. It is provided in the simplest possible form for such generic data, and comprises the backbone of the database. It is out of the scope of the various database end-users; it serves database application developers and is defined by the database administrators who build the database.

The internal level (or physical level) is, in fact, part of the database's implementation inside a DBMS (see Implementation section below). It is concerned with cost, performance, scalability and other operational matters. It deals with the storage layout of the conceptual level, provides supporting storage structures such as indexes to enhance performance, and occasionally stores data of individual views (materialized views), computed from generic data, when performance justifies such redundancy. It balances all the external views' performance requirements, which may conflict, in an attempt to optimize overall database usage by all its end-users according to the database's goals and priorities.

All the three levels are maintained and updated according to changing needs by database administrators who often also participate in the database design (see below).

The above three-level database architecture also relates to, and is motivated by, the concept of data independence (see the requirement above), which has long been described as a desired database property and was one of the major initial driving forces of the relational model. In the context of this architecture it means that changes made at a certain level do not affect definitions and software developed with higher-level interfaces, and are incorporated at the higher levels automatically. For example, changes in the internal level do not affect application programs written using conceptual-level interfaces, which saves substantial change work that would otherwise be needed.

In summary, the conceptual level is a level of indirection between the internal and external levels. On one hand it provides a common view of the database, independent of the different external view structures; on the other hand it is uncomplicated by details of how the data is stored or managed (the internal level). In principle every level, and even every external view, can be presented by a different data model. In practice a given DBMS usually uses the same data model for both the external and the conceptual levels (e.g., the relational model). The internal level, which is hidden inside the DBMS and depends on its implementation (see Implementation section below), requires a different level of detail and uses its own data structure types, typically different in nature from the structures of the external and conceptual levels, which are exposed to DBMS users (e.g., via the data models above): while the external and conceptual levels are focused on, and serve, DBMS users, the concern of the internal level is effective implementation details.
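In relational DBMSs, SQL views are the standard mechanism for the external level over a conceptual-level table. The schema below (finance vs. human-resources views over one employee table) is invented to mirror the example in the text.

```python
import sqlite3

# The three-level idea in miniature: one conceptual-level table,
# two external-level views for different departments.
con = sqlite3.connect(":memory:")
con.executescript("""
    -- conceptual level: the generic employee data
    CREATE TABLE employee (name TEXT, salary INTEGER, skills TEXT);
    INSERT INTO employee VALUES ('Ada', 5000, 'SQL'), ('Ben', 4000, 'Perl');

    -- external level: finance sees payments, HR sees skills
    CREATE VIEW finance_view AS SELECT name, salary FROM employee;
    CREATE VIEW hr_view      AS SELECT name, skills FROM employee;
""")
fin = con.execute("SELECT * FROM finance_view").fetchall()
hr = con.execute("SELECT * FROM hr_view").fetchall()
print(fin)  # -> [('Ada', 5000), ('Ben', 4000)]
print(hr)   # -> [('Ada', 'SQL'), ('Ben', 'Perl')]
```

Because both views are derived from the one conceptual table, a change to the table's physical storage would not affect either department's view definition, which is the data-independence point made above.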

[edit] Database security

Main article: Database security

Database security deals with the various aspects of protecting the database's content, its owners, and its users. It ranges from protection against intentional unauthorized database use to unintentional database access by unauthorized entities (e.g., a person or a computer program).

The following are major areas of database security (among many others).


[edit] Access control

Main article: Access control

Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or utilizing specific access paths to the former (e.g., using specific indexes or other data structures to access information).

Database access controls are set by special personnel, authorized by the database owner, who use dedicated, protected security DBMS interfaces.

[edit] Data security

Main articles: Data security and Encryption

The definition of data security varies and may overlap with other database security aspects. Broadly, it deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; e.g., see Physical security) and against the interpretation of them, or parts of them, into meaningful information (e.g., by looking at the strings of bits they comprise and concluding specific valid credit-card numbers; e.g., see Data encryption).

[edit] Database audit

Main article: Database audit

Database audit primarily involves monitoring to ensure that no security breach, in any aspect, has occurred. If a security breach is discovered, then all possible corrective actions are taken.

[edit] Database design

Main article: Database design

Database design is done before building a database, to meet the needs of the end-users within a given application/information system that the database is intended to support. The database design defines the needed data and data structures that such a database comprises. A design is typically carried out according to the common three architectural levels of a database (see Database architecture above). First, the conceptual level is designed, which defines the overall picture/view of the database and reflects all the real-world elements (entities) the database intends to model, as well as the relationships among them. On top of it, the external level (the various views of the database) is designed according to the (possibly completely different) needs of specific end-user types. More external views can be added later. External view requirements may modify the design of the conceptual level (i.e., add/remove entities and relationships), but usually a well-designed conceptual level for an application supports most of the needed external views well. The conceptual view also determines the internal level (which primarily deals with data layout in storage) to a great extent. External view requirements may add supporting storage structures, such as indexes, for enhanced performance. Typically the internal layer is optimized for top performance in an averaged way that takes into account the (possibly conflicting) performance requirements of different external views according to their relative importance. While the conceptual- and external-level designs can usually be done independently of any DBMS (DBMS-independent design software packages exist, possibly with interfaces to some specific popular DBMSs), the internal-level design relies heavily on the capabilities and internal data structures of the specific DBMS used.

A common way to carry out conceptual-level design is to use the entity-relationship model (ERM) (both the basic one and the enhancements it has undergone), since it provides a straightforward, intuitive perception of an application's elements. An alternative approach, which preceded the ERM, is to use the relational model and the dependencies (mathematical relationships) among data to normalize the database, i.e., to define the ("optimal") relations (data tuple types) in the database. Though a large body of research exists for this method, it is more complex, less intuitive, and not more effective than the ERM method. Thus normalization is used less in practice than the ERM method.

Another aspect of database design is its security. It involves both defining access control to database objects (e.g., Entities, Views) as well as defining security levels and methods for the data itself (See Database security above).

[edit] Entities and relationships

Main article: Entity-relationship model

The most common database design methods are based on the entity-relationship model (ERM, or ER model). This model views the world in a simplistic but very powerful way: it consists of "entities" and the "relationships" among them. Accordingly, a database consists of entity and relationship types, each with defined attributes (field types), that model concrete entities and relationships. Modeling a database in this way typically yields an effective database with desired properties (as in some normal forms; see Normalization below). Such a model can be translated into any other data model (e.g., the relational model) required by any specific DBMS for building an effective database.

[edit] Database normalization

Main article: Database normalization

In the design of a relational database, the process of organizing database relations to minimize redundancy is called normalization. The goal is to produce well-structured relations so that additions, deletions, and modifications of a field can be made in just one relation (table), without worrying about the appearance and updating of the same field in other relations. The process is algorithmic and based on dependencies (mathematical relationships) that exist among relations' field types. The result of the process is to bring the database relations into a certain "normal form". Several normal forms exist, with different properties.
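The redundancy argument can be made concrete with one small, invented schema: a flat table that repeats a department name on every employee row, versus a normalized pair of tables where that fact is stored once.

```python
import sqlite3

# Normalization by example: move the repeated department name into
# its own relation so it can be updated in exactly one place.
con = sqlite3.connect(":memory:")
con.executescript("""
    -- unnormalized: the department name repeats on every employee row
    CREATE TABLE emp_flat (emp TEXT, dept_name TEXT);
    INSERT INTO emp_flat VALUES ('Ada', 'Research'), ('Ben', 'Research');

    -- normalized: the repeated fact lives in its own relation
    CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
    CREATE TABLE employee (emp TEXT, dept_id INTEGER REFERENCES department);
    INSERT INTO department VALUES (1, 'Research');
    INSERT INTO employee VALUES ('Ada', 1), ('Ben', 1);
""")
# Renaming the department is now a single-row update...
con.execute("UPDATE department SET dept_name = 'R&D' WHERE dept_id = 1")
# ...and every employee sees the new name through the join.
rows = con.execute("""
    SELECT emp, dept_name FROM employee
    JOIN department USING (dept_id) ORDER BY emp
""").fetchall()
print(rows)  # -> [('Ada', 'R&D'), ('Ben', 'R&D')]
```

In `emp_flat`, the same rename would have to touch every matching row, which is the update anomaly normalization removes.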


[edit] Database building, maintaining, and tuning

See also: Database life-cycle

Main article: Database tuning

After a database has been designed for an application comes the stage of building it. Typically an appropriate general-purpose DBMS can be selected for this purpose. A DBMS provides the needed user interfaces to be used by database administrators (e.g., a data definition language interface) to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (such as security-related and storage allocation parameters).

When the database is ready (all its data structures and other needed components are defined), it is typically populated with the initial application's data (database initialization, which is typically a distinct project, in many cases using specialized DBMS interfaces that support bulk insertion) before being made operational. In some cases the database becomes operational while empty of application data, and data accumulates during its operation.

After the database has been built and made operational comes the maintenance stage: various database parameters may need to be changed and tuned for better performance, the application's data structures may be changed or extended, new application programs may be written to add functionality, and so on.

Miscellaneous areas

Database migration between DBMSs

See also Database migration in Data migration

A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership, or TCO), functional, and operational (different DBMSs may have different capabilities). Migration involves transforming the database from one DBMS type to another. The transformation should, if possible, leave the related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels (see Database architecture above) should be preserved in the transformation; it may also be desirable to preserve some aspects of the internal level. A complex or large database migration may be a complicated and costly (one-time) project in itself, which should be factored into the decision to migrate, even though tools may exist to help migration between specific DBMSs. Typically a DBMS vendor provides tools to help import databases from other popular DBMSs.

Implementation: Database management systems

or How database usage requirements are met


Main article: Database management system

A database management system (DBMS) is a system that allows databases to be built and maintained, their data to be utilized, and information to be retrieved from them. A DBMS defines the database type that it supports, as well as its functionality and operational capabilities. A DBMS provides the internal processes for the external applications built on it. The end-users of such an application are usually exposed only to that application and do not interact directly with the DBMS; they enjoy the effects of the underlying DBMS, but its internals are completely invisible to them. Database designers and database administrators interact with the DBMS through dedicated interfaces (such as a data definition language interface) to build and maintain the applications' databases, and thus need more knowledge and understanding of how DBMSs operate, of their external interfaces, and of their tuning parameters.

A DBMS consists of software that operates databases, providing storage, access, security, backup, and other facilities to meet the requirements at hand. DBMSs can be categorized by the database model(s) they support (such as relational or XML), the type(s) of computer they run on (such as a server cluster or a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and performance trade-offs (such as maximum scale or maximum speed). Some DBMSs cover more than one entry in these categories, e.g., supporting multiple query languages. Examples of commonly used DBMSs are MySQL, PostgreSQL, Microsoft Access, SQL Server, Oracle, Sybase, etc. Database software typically supports the Open Database Connectivity (ODBC) standard, which allows a database to integrate (to some extent) with other databases.

The development of a mature general-purpose DBMS typically takes several years and many person-years. DBMS developers typically update their products to follow and take advantage of progress in computer and storage technologies. Several DBMS products, like Oracle and IBM DB2, have been under ongoing development since the 1970s and 1980s. Since DBMSs constitute a significant economic market, computer and storage vendors often take DBMS requirements into account in their own development plans.

DBMS architecture: major DBMS components

DBMS architecture specifies its components and their interfaces. DBMS architecture is distinct from Database architecture described above. The following are major DBMS components:

DBMS external interfaces - These are the means of communicating with the DBMS to perform all the operations on a database, like defining data types, assigning security levels, updating data, querying the database, etc. An interface can be either a user interface (e.g., typically for a database administrator) or an application programming interface (API) used for communication between an application program and the DBMS.

Database language engines (or processors) - Most operations on databases are performed through expressions in database languages (see above). Languages exist for data definition, data manipulation, and queries (e.g., SQL), as well as for specifying various aspects of security, and more. Language expressions (the language "sentences," which typically consist of "words" and parameters, as in programming languages) are fed into the DBMS through the proper interfaces. A language engine processes the language expressions (with a compiler or interpreter) to extract the intended database operations so that they can be executed by the DBMS.

Query optimizer - Performs query optimization (see section below) on every query, choosing the most efficient query plan (a partial order, or tree, of operations) to be executed to compute the query result.

Database engine - Performs the received database operations on the database objects, typically at their higher-level representation.

Storage engine - Translates the operations into low-level operations on the stored bits (see below). In some references the storage engine is viewed as part of the database engine.

Transaction engine - For correctness and reliability, most DBMS internal operations are performed encapsulated in transactions (see below). Transactions can also be specified externally to the DBMS to encapsulate a group of operations. The transaction engine tracks all transactions and manages their execution according to the transaction rules (e.g., proper concurrency control, and proper commit or abort for each).

Security engine - Manages all database security aspects within the DBMS. Interprets and performs security-related operations, and performs all needed other security tasks.

Database storage

Main article: Computer data storage

Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture above. It also contains all the information needed (e.g., metadata, "data about the data," and internal data structures) to reconstruct the conceptual and external levels from the internal level when needed. Storage is not part of the DBMS but rather is manipulated by it (by its storage engine; see above) to manage the database that resides in it. Though typically accessed by a DBMS through the underlying operating system (often using the operating system's file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS in operation always has its database residing in several types of storage (e.g., memory and external computer data storage), as dictated by contemporary computer technology. The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in storage in structures that look completely different from the way the data appear at the conceptual and external levels, but that attempt to optimize the reconstruction of those levels when needed by users and programs, as well as the computation of additional types of needed information from the data (e.g., when querying the database).

In principle, database storage (like computer data storage in general) can be viewed as a linear address space (a tree is a more accurate description), where every bit of data has a unique address. In practice, only a very small percentage of addresses are kept as initial reference points (which itself requires storage); most of the database data are accessed by indirection, using displacement calculations (distances in bits from the reference points) and data structures (see below) that define access paths (using pointers) to all the needed data in an effective manner, optimized for the needed data access operations.

The data

Coding the data and error-correcting codes

Main articles: Code, Character encoding, Error detection and correction, and Cyclic redundancy check

Data are encoded by assigning a bit pattern to each alphabet character, digit, other numerical pattern, and multimedia object. Many standards exist for encoding (e.g., ASCII, JPEG, MPEG-4).

Adding redundant bits to each encoded unit allows errors in coded data both to be detected and, based on mathematical algorithms, corrected. Errors occur regularly, with low probability, due to random bit-value flipping, due to "physical bit fatigue" (the loss of a physical bit's ability in storage to maintain a distinguishable value, 0 or 1), or due to errors in inter- or intra-computer communication. A group of malfunctioning physical bits (the specific defective bit is not always known; the group's definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with a functioning equivalent group elsewhere in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in storage for error detection.
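Both ideas can be sketched in a few lines with Python's standard library (the sample string is invented): characters are assigned bit patterns, and a CRC over the encoded bytes detects later corruption.

```python
import zlib

# Characters are assigned bit patterns (ASCII here); a CRC checksum over
# the encoded bytes lets later corruption be detected.
data = "database".encode("ascii")
checksum = zlib.crc32(data)

corrupted = bytes([data[0] ^ 0x01]) + data[1:]   # flip a single bit
assert zlib.crc32(data) == checksum              # intact data verifies
assert zlib.crc32(corrupted) != checksum         # the flipped bit is detected
```

CRC-32, as used here, detects all single-bit errors by construction; correcting errors (rather than just detecting them) requires the richer error-correcting codes mentioned above.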

Data compression

Main article: Data compression

Data compression methods allow, in many cases, a string of bits to be represented by a shorter bit string ("compression") and the original string to be reconstructed ("decompression") when needed. This uses substantially less storage (by tens of percent) for many types of data, at the cost of more computation (compressing and decompressing when needed). The trade-off between the storage saved and the cost of the related computation and possible delays in data availability is analyzed before deciding whether to keep certain data in a database compressed.

Data compression is typically controlled through the DBMS's data definition interface, but in some cases may be a default and automatic.
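The compress/decompress round trip above can be sketched with Python's zlib module (a hedged toy on invented data, not any particular DBMS's implementation):

```python
import zlib

# Repetitive data compresses well, and decompression restores it exactly;
# the storage saving is traded against the compute spent on the round trip.
original = b"AAAA-BBBB-" * 1000                 # 10,000 bytes of repetitive data
compressed = zlib.compress(original)

assert len(compressed) < len(original)          # substantially less storage
assert zlib.decompress(compressed) == original  # lossless reconstruction
print(len(original), len(compressed))
```

How well this pays off depends on the data: highly repetitive data shrinks dramatically, while already-random data may not shrink at all.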

Data encryption

Main article: Cryptography

For security reasons certain types of data (e.g., credit-card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots (taken either via unforeseen vulnerabilities in a DBMS, or more likely, by bypassing it).


Data encryption is typically controlled through the DBMS's data definition interface, but in some cases may be a default and automatic.
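As a toy sketch only (NOT production cryptography; real systems use vetted ciphers such as AES, and the card number below is a made-up example), XOR with a random one-time key shows the principle: stored bytes become unreadable without the key, which is kept outside the data store.

```python
import secrets

# Toy one-time-pad sketch, for illustration only: XOR with a random key
# makes the stored bytes unreadable without the key.
plaintext = b"4111-1111-1111-1111"           # hypothetical card number
key = secrets.token_bytes(len(plaintext))    # key kept outside the data store
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

# A storage snapshot would expose only ciphertext; the key reverses it:
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == plaintext
```

The point mirrors the paragraph above: an attacker who bypasses the DBMS and reads raw storage chunks sees only the ciphertext.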

Data storage types

As is common in current computer technology, a database consists of bits (binary digits, each with two states: 0 or 1). This collection of bits describes both the contained database data and its related metadata (i.e., data that describe the contained data and allow computer programs to manipulate the database data correctly). The size of a database can nowadays reach tens of terabytes, where a byte is eight bits. The physical materialization of a bit can employ various existing technologies, while new and improved technologies are constantly under development. Common examples are:

Magnetic medium (e.g., in Magnetic disk) - Orientation of magnetic regions on a surface of material (two directions, for 0 and 1).

Dynamic random-access memory (DRAM) - State of a miniature electronic circuit consisting of a few transistors (two states, for 0 and 1).

These two examples are respectively for two major storage types:

Nonvolatile storage can maintain its bit states (0s and 1s) without electrical power supply, or when power supply is interrupted;

Volatile storage loses its bit values when power supply is interrupted (i.e., its content is erased).

Sophisticated storage units, which may in fact be dedicated parallel computers supporting a large amount of nonvolatile storage, typically must also include components with volatile storage. Some such units employ batteries that can provide power for several hours if external power is interrupted (e.g., see the EMC Symmetrix) and thus keep the content of the volatile storage parts intact. Just before its batteries lose power, such a device typically automatically backs up its volatile content (into nonvolatile storage) and shuts off to protect its data.

Databases are usually too valuable (in terms of importance and the resources, e.g., time and money, invested in building them) to be lost through a power interruption. Thus, at any point in time most of their content resides in nonvolatile storage. Even if for operational reasons very large portions of them reside in volatile storage (e.g., tens of gigabytes in main memory, for in-memory databases), most of this is backed up in nonvolatile storage. The relatively small portion that temporarily has no nonvolatile backup can be reconstructed by proper automatic database recovery procedures after a loss of volatile storage content.

More examples of storage types:

Volatile storage can be found in processors, computer memory (e.g., DRAM), etc. Nonvolatile storage types include ROM, EPROM, hard disk drives, flash memory and drives, storage arrays, etc.

Storage metrics



A database always uses several types of storage when operational (and several more when idle). Different types may differ significantly in their properties, and the optimal mix of storage types is determined by the types and quantities of operations each storage type needs to perform, as well as by considerations like physical space, energy consumption, and heat dissipation (which may become critical for a large database). Storage types can be categorized by the following attributes:

Volatile/nonvolatile.

Cost of the medium (e.g., per megabyte).

Cost to operate (cost of energy consumed per unit time).

Access speed (e.g., bytes per second).

Granularity - from fine to coarse (e.g., size in bytes of an access operation).

Reliability (the probability of spontaneous bit value change under various conditions).

Maximal possible number of writes (of any specific bit or group of bits): may be constrained by the technology used (e.g., "write once" or "write twice"), or limited by "physical bit fatigue," the loss of the ability to distinguish between the 0 and 1 states after many state changes (e.g., in flash memory).

Power needed to operate (Energy per time; energy per byte accessed), Energy efficiency, Heat to dissipate.

Packaging density (e.g., realistic number of bytes per volume unit)

Data storage devices and their interfaces

Main article: Data storage device

Storage devices can also be categorized by the following common packaging types:

Application-specific integrated circuit (ASIC) - typically comprises the smallest storage components available, and can employ various technologies (e.g., ROM, RAM, EEPROM, Flash memory, etc.); typical components inside computers, Drives, and storage arrays.

Drive - An enclosure that may include several ASIC components, more electronic circuitry, and possibly mechanical components (e.g., hard disk drive, tape drive, flash memory drive); may be very sophisticated, with elaborate functionality.

Storage array - Typically includes multiple (replaceable) drives, most with considerable additional computing power (processors and a fast internal network) to manage the drives collectively, in fact comprising dedicated storage computers; the most widely used form of storage technology external to a computer (e.g., the EMC Symmetrix).

Respective interface types


ASIC components typically employ private interfaces and are usually embedded in printed circuit boards (PCBs) that may employ standard computer bus interfaces (e.g., Conventional PCI, PCI Express, etc.). Drives and storage arrays employ standard storage interfaces (e.g., Fibre channel, iSCSI, SATA, etc.). A single storage array may employ multiple interfaces of different types.

Interfaces by data units (block, file, and object)

While most ASIC devices enable data manipulation at the bit level, many of them provide commands for moving whole groups of bits, sometimes quite large ones. Drive and array interfaces typically move large groups of bits called blocks. Some array types provide, beyond blocks, file-system and object interfaces built on top of the underlying block abstraction.

Protecting storage device content: device replication and RAID

Main article: RAID

See also Disk storage replication

While the malfunction of a group of bits may be resolved by error detection and correction mechanisms (see above), the malfunction of a storage device requires different solutions. The following solutions are commonly used and valid for most storage devices:

Device replication - A common solution is to constantly maintain an identical copy of the device's content on another device (typically of the same type). The downside is that this doubles the storage, and both devices (copies) need to be updated simultaneously, with some overhead and possibly some delay. The upside is that the same data group can be read concurrently by two independent processes, which increases performance. When one of the replicated devices is detected to be defective, the other copy is still operational and is used to generate a new copy on another device (a replacement, usually an available operational stand-by).

Redundant array of independent disks (RAID) - This method generalizes device replication by allowing one device in a group of N devices to fail and be replaced, with its content restored (device replication is RAID with N=2). RAID groups of N=5 or N=6 are common. N>2 saves storage compared with N=2, at the cost of more processing during both regular operation (often with reduced performance) and defective device replacement.

Database storage layout

Database bits are laid out in storage in data structures and groupings that can take advantage both of known efficient algorithms to retrieve and manipulate them and of the storage's own properties. Typically the storage itself is designed to meet the requirements of the various areas that use it extensively, including databases. An active DBMS always uses several storage types simultaneously (e.g., memory and external computer data storage), with respective layout methods.

Database storage hierarchy



A database, while in operation, resides simultaneously in several types of storage. By the nature of contemporary computers, most of the active part of a database inside a computer resides (partially replicated) in volatile storage (e.g., main memory).

Observation: currently a correlation exists between storage speed and cost.

Data access path - the storage hierarchy, ordered by speed (and cost): e.g., registers - cache (hierarchy) - main memory - flash - magnetic disk - magnetic tape - optical disk.

Data structures

Main article: Database storage structures


A data structure is an abstract construct that embeds data in a well-defined manner. An efficient data structure allows data to be manipulated in efficient ways; manipulation may include insertion, deletion, updating, and retrieval in various modes. A given data structure type may be very effective for certain operations and very ineffective for others. A data structure type is selected during DBMS development to best meet the operations needed for the types of data it contains. The choice also typically takes into consideration the type of storage the structure resides in (e.g., speed of access, minimal size of an accessed storage chunk, etc.). In some DBMSs, database administrators have the flexibility to choose among data structure options for user data, for performance reasons; sometimes the data structures have selectable parameters to tune database performance.

Databases may store data in many data structure types.[9] Common examples are the following:

ordered/unordered flat files

hash tables

B+ trees

ISAM

heaps
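Two of the structures listed above can be contrasted in a few lines of Python (the records are invented for illustration): a hash table gives near-constant-time lookup by exact key, while a sorted sequence (the idea underlying B+ tree leaves and ISAM) supports binary search and range scans.

```python
import bisect

# Invented sample records: (key, value) pairs.
records = [(3, "Carol"), (1, "Ada"), (2, "Bob")]

hash_index = dict(records)                   # hash table: key -> value
sorted_keys = sorted(k for k, _ in records)  # ordered structure: [1, 2, 3]

assert hash_index[2] == "Bob"                # exact-key lookup, O(1) average
pos = bisect.bisect_left(sorted_keys, 2)     # binary search, O(log n)
assert sorted_keys[pos] == 2

# A range query (all keys in [2, 3]) is natural only on the ordered form:
lo = bisect.bisect_left(sorted_keys, 2)
hi = bisect.bisect_right(sorted_keys, 3)
assert sorted_keys[lo:hi] == [2, 3]
```

This is exactly the trade-off a DBMS weighs when choosing a structure: hashing wins for point lookups, ordered structures win for ranges and sorted scans.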

Application data and DBMS data

A typical DBMS does not store only the data of the application it serves. To handle the application data, the DBMS must store those data in data structures that constitute data in their own right. In addition, the DBMS needs its own data structures and many types of bookkeeping data, like indexes and logs. The DBMS data are an integral part of the database and may comprise a substantial portion of it.

Database indexing

Main article: Index (database)

Indexing is a technique for improving database performance. The many types of indexes share the common property that they eliminate the need to examine every entry when running a query. In large databases this can reduce query time or cost by orders of magnitude. The simplest form of index is a sorted list of values that can be searched with a binary search, with an adjacent reference to the location of each entry, analogous to the index at the back of a book. The same data can have multiple indexes (an employee database could be indexed both by last name and by hire date).

Indexes affect performance, but not results. Database designers can add or remove indexes without changing application logic, reducing maintenance costs as the database grows and database usage evolves.

Given a particular query, the DBMS' query optimizer is responsible for devising the most efficient strategy for finding matching data. The optimizer decides which index or indexes to use, how to combine data from different parts of the database, how to provide data in the order requested, etc.

Indexes can speed up data access, but they consume space in the database, and must be updated each time the data is altered. Indexes therefore can speed data access but slow data maintenance. These two properties determine whether a given index is worth the cost.
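The employee example above can be sketched with SQLite (table and index names are invented): the same table gets two indexes, and the optimizer uses one of them for the lookup without any change to the query's results.

```python
import sqlite3

# Invented employee table with two indexes, as in the book-index analogy.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, last_name TEXT, hire_date TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Codd", "1970-06-01"), (2, "Kent", "1983-01-15")])
conn.execute("CREATE INDEX idx_last_name ON employee(last_name)")
conn.execute("CREATE INDEX idx_hire_date ON employee(hire_date)")

# The index changes the access path chosen by the optimizer, not the result:
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM employee WHERE last_name = 'Codd'"
).fetchone()
print(plan[-1])  # e.g. a SEARCH using idx_last_name rather than a full scan

row = conn.execute("SELECT id FROM employee WHERE last_name = 'Codd'").fetchone()
assert row == (1,)
```

Dropping either index would leave `row` unchanged, illustrating the point that indexes affect performance but not results.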

Database data clustering

(Also referred to as record clustering.)

In many cases, substantial performance gains result if database objects of different types that are usually used together are laid out in storage in proximity, i.e., clustered. This usually allows the needed related objects to be retrieved from storage in a minimal number of input operations (each of which can be substantially time-consuming). Even for in-memory databases, clustering provides a performance advantage, due to the common use of large caches for input-output operations in memory, with similar resulting behavior.

For example, it may be beneficial to cluster a record of an item in stock with all of its respective order records. The decision whether to cluster certain objects depends on the objects' usage statistics, object sizes, cache sizes, storage types, etc. In a relational database, clustering the two respective relations "Items" and "Orders" saves the expensive execution of a join between the two relations whenever such a join is needed in a query (the join result is already materialized in storage by the clustering, ready to be used).
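In SQL terms (with invented data), this is the join that clustering would pre-arrange in storage:

```python
import sqlite3

# Invented "items"/"orders" data for the clustering example above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items  (item_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, item_id INTEGER, qty INTEGER);
    INSERT INTO items  VALUES (1, 'disk'), (2, 'tape');
    INSERT INTO orders VALUES (10, 1, 5), (11, 1, 2), (12, 2, 7);
""")

# All orders for item 1, together with the item's own record:
rows = conn.execute("""
    SELECT items.name, orders.order_id, orders.qty
    FROM items JOIN orders ON items.item_id = orders.item_id
    WHERE items.item_id = 1
""").fetchall()
assert sorted(rows) == [('disk', 10, 5), ('disk', 11, 2)]
```

Clustering the item row next to its order rows means this query's answer can be read from one storage neighborhood instead of being assembled by a join at query time.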

Database materialized views


Main article: Materialized view

Storage redundancy is often employed to increase performance. A common example is storing materialized views, which are frequently needed external views (see Database architecture above). Storing such views saves their expensive computation each time they are needed. The overhead of maintaining them, besides the storage redundancy, is updating them whenever their underlying data change. An analysis of the overall saving versus the cost determines whether a materialized view is generated.
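SQLite has no MATERIALIZED VIEW statement, so the sketch below (with invented data) simulates one with CREATE TABLE ... AS SELECT: the view's result is computed once and stored, and must be refreshed when the base data change.

```python
import sqlite3

# Simulated materialized view: the aggregation is computed once and stored.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('east', 100), ('east', 50), ('west', 70);
    CREATE TABLE sales_by_region AS
        SELECT region, SUM(amount) AS total FROM sales GROUP BY region;
""")

# Reading the stored result avoids recomputing the aggregation each time:
rows = conn.execute("SELECT * FROM sales_by_region ORDER BY region").fetchall()
assert rows == [('east', 150), ('west', 70)]
```

The maintenance cost is visible here too: a new row in `sales` makes `sales_by_region` stale until it is rebuilt or incrementally updated.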

Database object replication

Main article: Database replication

See also Replication below

Occasionally a database employs storage redundancy through the replication of database objects, to increase data availability (both to improve the performance of simultaneous access by multiple end-users to the same database object and to provide resiliency in case of partial failure of a distributed database). Updates of replicated objects need to be synchronized.

Database transactions

Main article: Database transaction

As with every software system, a DBMS operating in a faulty computing environment is prone to failures of many kinds, and a failure can corrupt the database unless special measures are taken to prevent it. A DBMS achieves a certain level of fault tolerance by encapsulating the work performed upon the database (the execution of programs) in database transactions. The concept of a database transaction (or atomic transaction) evolved to enable both well-understood database system behavior in a faulty environment, where crashes can happen at any time, and recovery from a crash to a well-understood database state. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in databases and other systems as well. Each transaction has well-defined boundaries in terms of which program/code executions it includes (determined by the transaction's programmer via special transaction commands).

The ACID rules

Main article: ACID

Every database transaction obeys the following rules (by support in the DBMS; i.e., a DBMS is designed to guarantee them for the transactions it runs):

Atomicity - When a transaction is completed (committed or aborted, respectively), either the effects of all of its operations remain, or none do ("all or nothing" semantics). In other words, to the outside world a committed transaction appears (by its effects on the database) to be indivisible, atomic, while an aborted transaction leaves no effects on the database at all, as if it never existed.

Consistency - Every transaction must leave the database in a consistent (correct) state, i.e., maintain the predetermined integrity rules of the database (constraints upon and among the database's objects). A transaction must transform the database from one consistent state to another (it is the responsibility of the transaction's programmer to make sure that the transaction itself is correct, i.e., that it performs what it intends to while maintaining the integrity rules). Thus, since a database can normally be changed only by transactions, all of the database's states are consistent; an aborted transaction does not change the state.

Isolation - Transactions cannot interfere with each other. Moreover, usually the effects of an incomplete transaction are not visible to another transaction. Providing isolation is the main goal of concurrency control.

Durability - Effects of successful (committed) transactions must persist through crashes (typically by recording the transaction's effects and its commit event in a non-volatile memory).
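Atomicity in particular can be demonstrated with Python's sqlite3 module (table and values are invented): a constraint violation aborts the transaction, and the rollback erases the earlier insert along with it.

```python
import sqlite3

# Atomicity sketch: the second insert violates a constraint, the
# transaction is rolled back, and the first insert vanishes with it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.commit()

try:
    conn.execute("INSERT INTO account VALUES (1, 100)")
    conn.execute("INSERT INTO account VALUES (1, 200)")  # PRIMARY KEY violation
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # "all or nothing": undo the whole unit of work

rows = conn.execute("SELECT * FROM account").fetchall()
assert rows == []  # neither insert survived the aborted transaction
```

This relies on the sqlite3 module's default behavior of opening an implicit transaction before the first data-modifying statement, so the rollback covers both inserts.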

Isolation, concurrency control, and locking

Main articles: Concurrency control, Isolation (database systems), and Two-phase locking

Isolation provides a database user with the perception that no other users are operating on the database at the same time and interfering with that user, even though in practice many thousands of users may execute operations on the database concurrently. The issue is not just perception but the correctness and consistency of both the database itself and the information it provides to applications and end-users.

Concurrency control comprises the underlying mechanisms in a DBMS that handle isolation and guarantee the related correctness. It is heavily used by the database and storage engines (see above), both to guarantee the correct execution of concurrent transactions and (with different mechanisms) the correctness of other DBMS processes. The transaction-related mechanisms typically constrain the timing of the database's data access operations (transaction schedules) to certain orders, characterized by the serializability and recoverability schedule properties. Constraining the execution of database access operations typically means reduced performance (lower rates of execution), so concurrency control mechanisms are designed to provide the best performance possible. Often, when possible without harming correctness, the serializability property is compromised for better performance.

Locking is the most common transaction concurrency control method in DBMSs, providing both serializability and recoverability for correctness. To access a database object, a transaction first needs to acquire a lock on that object. Depending on the access operation type (e.g., reading or writing an object) and on the lock type (several types usually exist), acquiring the lock may be blocked and postponed if another transaction holds a blocking lock on that object. The locking method has many variants.

Query optimization

Main articles: Query optimization and Query optimizer


A query is a request for information from a database. It can be as simple as "find the address of the person with SS# 123-123-1234," or more complex, like "find the average salary of all employed people in California between the ages of 30 and 39." Query results are generated by accessing the relevant database data and manipulating them in a way that yields the requested information. Since database structures are complex, in most cases, and especially for non-trivial queries, the data needed for a query can be collected from the database via different access paths, through different data structures, and in different orders (different query plans). Each way typically requires different processing time; processing times for the same query may vary widely, from a fraction of a second to hours, depending on the plan selected. The purpose of query optimization, an automated process, is to find the way to process a given query in minimum time. The large possible variance in time justifies query optimization, although finding the exact optimal plan, among all possibilities, is typically very complex, time-consuming in itself, possibly too costly, and often practically impossible. Thus query optimization typically approximates the optimum by comparing several common-sense alternative plans (still, usually a large number of them) to produce, in reasonable time, a "good enough" plan that typically does not deviate much (percentage-wise) from the true optimum. This approach is effective, and for many queries it can make the difference between seconds and hours, or between minutes and days, of execution time.
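SQLite's optimizer makes the idea of alternative query plans visible (the table and data below are invented): the same logical query is answered by a full table scan without an index, and by an index search once one exists.

```python
import sqlite3

# Invented person table, echoing the SS# example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (ssn TEXT, name TEXT, address TEXT)")
conn.execute("INSERT INTO person VALUES ('123-123-1234', 'Ada', '1 Main St')")

q = "SELECT address FROM person WHERE ssn = '123-123-1234'"
before = conn.execute("EXPLAIN QUERY PLAN " + q).fetchone()[-1]
conn.execute("CREATE INDEX idx_ssn ON person(ssn)")
after = conn.execute("EXPLAIN QUERY PLAN " + q).fetchone()[-1]

print(before)  # a SCAN of the whole table
print(after)   # a SEARCH using idx_ssn
assert "SCAN" in before and "SEARCH" in after
```

The optimizer chose the cheaper plan automatically in each case; on a table of millions of rows the difference between the two plans is the seconds-versus-hours gap described above.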

DBMS support for the development and maintenance of a database and its application

A DBMS typically aims to provide a convenient environment for developing and later maintaining an application built around its respective database type. It either provides such tools itself or allows integration with external tools. Examples of such tools relate to database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration and the related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.

PHP
From Wikipedia, the free encyclopedia

This article is about the scripting language. For other uses, see PHP (disambiguation).

PHP


Paradigm: imperative, object-oriented, procedural, reflective

Appeared in: 1995[1]

Designed by: Rasmus Lerdorf

Developer: The PHP Group

Stable release: 5.3.6 (March 17, 2011)

Preview release: 5.4.0alpha3 (August 4, 2011)

Typing discipline: dynamic, weak

Major implementations: Zend Engine, Phalanger, Quercus, Project Zero, HipHop

Influenced by: C, Perl, Java, C++, Tcl[1]

Influenced: RadPHP (formerly PHP4Delphi)

Implementation language: C

OS: Cross-platform

License: PHP License

Usual filename extensions: .php, .phtml, .php3, .php4, .php5, .phps

Website: php.net

PHP Programming at Wikibooks


PHP is a general-purpose scripting language originally designed for web development to produce dynamic web pages. For this purpose, PHP code is embedded into the HTML source document and interpreted by a web server with a PHP processor module, which generates the web page document. PHP has also evolved to include a command-line interface capability and can be used in standalone graphical applications.[2] It can be deployed on most web servers and as a standalone interpreter, on almost every operating system and platform, free of charge.[3] A competitor to Microsoft's Active Server Pages (ASP) server-side script engine[4] and similar languages, PHP is installed on more than 20 million websites and 1 million web servers.[5]

PHP was originally created by Rasmus Lerdorf in 1995. The main implementation of PHP is now produced by The PHP Group and serves as the de facto standard for PHP as there is no formal specification.[6] PHP is free software released under the PHP License which is incompatible with the GNU General Public License (GPL) due to restrictions on the usage of the term PHP.[7]

While PHP originally stood for "Personal Home Page", it is now said to stand for "PHP: Hypertext Preprocessor", a recursive acronym.[8]

Contents

[hide]

1 History

o 1.1 Licensing

o 1.2 Release history

2 Usage

3 Security

4 Syntax

o 4.1 Data types

o 4.2 Functions

4.2.1 PHP 5.2 and earlier

4.2.2 PHP 5.3 and newer

o 4.3 Objects

4.3.1 Visibility of properties and methods

5 Speed optimization

6 Compilers


7 Resources

8 See also

9 Notes

10 External links

History

Rasmus Lerdorf, who wrote the original Common Gateway Interface component, and Andi Gutmans and Zeev Suraski, who rewrote the parser that formed PHP 3

PHP development began in 1994 when the Danish/Greenlandic programmer Rasmus Lerdorf initially created a set of Perl scripts he called "Personal Home Page Tools" to maintain his personal homepage. The scripts performed tasks such as displaying his résumé and recording his web-page traffic.[6][9][10] Lerdorf initially announced the release of PHP on the comp.infosystems.www.authoring.cgi Usenet discussion group on June 8, 1995.[11]

Zeev Suraski and Andi Gutmans, two Israeli developers at the Technion IIT, rewrote the parser in 1997 and formed the base of PHP 3, changing the language's name to the recursive initialism PHP: Hypertext Preprocessor.[6] Afterwards, public testing of PHP 3 began, and the official launch came in June 1998. Suraski and Gutmans then started a new rewrite of PHP's core, producing the Zend Engine in 1999.[12] They also founded Zend Technologies in Ramat Gan, Israel.[6]

In 2008 PHP 5 became the only stable version under development. Late static binding had been missing from PHP and was added in version 5.3.[13][14]

A new major version has been under development alongside PHP 5 for several years. This version was originally planned to be released as PHP 6 as a result of its significant changes, which included plans for full Unicode support. However, Unicode support took developers much longer to implement than originally thought, and the decision was made in March 2010[15] to move the project to a branch, with features still under development moved to trunk.


Changes in the new code include the removal of register_globals,[16] magic quotes, and safe mode.[17][18] The reason for the removals was that register_globals had opened security holes, and magic quotes behaved unpredictably, so both were best avoided. To escape characters, magic quotes may be replaced with the addslashes() function or, more appropriately, an escape mechanism specific to the database vendor, such as mysql_real_escape_string() for MySQL. Functions that will be removed in future versions and have been deprecated in PHP 5.3 produce a warning if used.[19]
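A hedged sketch of the escaping alternatives mentioned above; since mysql_real_escape_string() needs a live MySQL connection, only the generic addslashes() call is shown executing:

```php
<?php
// Escaping input explicitly, rather than relying on magic quotes.
$input = "O'Reilly";

// Generic, vendor-neutral escaping:
$escaped = addslashes($input);
echo $escaped; // prints "O\'Reilly"

// Preferred: a vendor-specific escape such as
// mysql_real_escape_string($input, $link) for MySQL, which requires an
// open connection $link (not shown here).
```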

Many high-profile open-source projects ceased to support PHP 4 in new code as of February 5, 2008, because of the GoPHP5 initiative,[20] provided by a consortium of PHP developers promoting the transition from PHP 4 to PHP 5.[21][22]

As of 2011 PHP does not have native support for Unicode or multibyte strings; Unicode support is under development for a future version of PHP and will allow strings as well as class-, method-, and function-names to contain non-ASCII characters.[23][24]

PHP interpreters are available on both 32-bit and 64-bit operating systems, but on Microsoft Windows the only official distribution is a 32-bit implementation, requiring Windows 32-bit compatibility mode while using Internet Information Services (IIS) on a 64-bit Windows platform. Experimental 64-bit versions of PHP 5.3.0 were briefly available for MS Windows, but have since been removed.[25]

Licensing

PHP is free software released under the PHP License, which stipulates that:[26]

4. Products derived from this software may not be called "PHP", nor may "PHP" appear in their name, without prior written permission from [email protected]. You may indicate that your software works in conjunction with PHP by saying "Foo for PHP" instead of calling it "PHP Foo" or "phpfoo"

This restriction on use of the name PHP makes it incompatible with the GNU General Public License (GPL).[27]

Release history

Legend: red = release no longer supported; green = release still supported; blue = future release.

Major version | Minor version | Release date | Notes

1 | 1.0.0 | 1995-06-08 | Officially called "Personal Home Page Tools (PHP Tools)". This is the first use of the name "PHP".[6]

2 | 2.0.0 | 1997-11-01 | Considered by its creator as the "fastest and simplest tool" for creating dynamic web pages.[6]

3 | 3.0.0 | 1998-06-06 | Development moves from one person to multiple developers. Zeev Suraski and Andi Gutmans rewrite the base for this version.[6]

4 | 4.0.0 | 2000-05-22 | Added a more advanced two-stage parse/execute tag-parsing system called the Zend engine.[28]

4 | 4.1.0 | 2001-12-10 | Introduced 'superglobals' ($_GET, $_POST, $_SESSION, etc.)[28]

4 | 4.2.0 | 2002-04-22 | Disabled register_globals by default. Data received over the network is no longer inserted directly into the global namespace, closing possible security holes in applications.[28]

4 | 4.3.0 | 2002-12-27 | Introduced the CLI, in addition to the CGI.[28][29]

4 | 4.4.0 | 2005-07-11 | Added man pages for phpize and php-config scripts.[28]

4 | 4.4.9 | 2008-08-07 | Security enhancements and bug fixes. The last release of the PHP 4.4 series.[30][31]

5 | 5.0.0 | 2004-07-13 | Zend Engine II with a new object model.[32]

5 | 5.1.0 | 2005-11-24 | Performance improvements with introduction of compiler variables in the re-engineered PHP engine.[32]

5 | 5.2.0 | 2006-11-02 | Enabled the filter extension by default. Native JSON support.[32]

5 | 5.2.17 | 2011-01-06 | Fix of critical vulnerability connected to floating point.

5 | 5.3.0 | 2009-06-30 | Namespace support; late static bindings; jump label (limited goto); native closures; native PHP archives (phar); garbage collection for circular references; improved Windows support; sqlite3; mysqlnd as a replacement for libmysql as the underlying library for the extensions that work with MySQL; fileinfo as a replacement for mime_magic for better MIME support; the Internationalization extension; and deprecation of the ereg extension.

5 | 5.3.1 | 2009-11-19 | Over 100 bug fixes, some of which were security fixes.

5 | 5.3.2 | 2010-03-04 | Includes a large number of bug fixes.

5 | 5.3.3 | 2010-07-22 | Mainly bug and security fixes; FPM SAPI.

5 | 5.3.4 | 2010-12-10 | Mainly bug and security fixes; improvements to FPM SAPI.

5 | 5.3.5 | 2011-01-06 | Fix of critical vulnerability connected to floating point.

5 | 5.3.6 | 2011-03-10 | Over 60 bug fixes that were reported in the previous version.

5 | 5.4.0alpha2 | 2011-07-14 | Removed items: register_globals, safe_mode, allow_call_time_pass_reference, session_register(), session_unregister() and session_is_registered(). Several improvements to existing features.

6 | ?.? | No date set | The development of PHP 6 has been delayed because the developers have decided the current approach to handling Unicode is not a good one, and are considering alternate ways in the next version of PHP. The updates that were intended for PHP 6 were added to PHP 5.4.0 instead.

Beginning on June 28, 2011, the PHP Group began following a timeline for releasing new versions of PHP.[33] Under this timeline, at least one release should occur every month. Once a year, a minor release may appear, which can include new features. Every minor release should receive at least two years of security and bug fixes, followed by at least one year of security fixes only, for a total three-year release process for every minor release. No new features (unless small and self-contained) are introduced into a minor release during that three-year process.

Usage

PHP is a general-purpose scripting language that is especially suited to server-side web development, where PHP generally runs on a web server. Any PHP code in a requested file is executed by the PHP runtime, usually to create dynamic web page content or dynamic images used on web sites or elsewhere.[34] It can also be used for command-line scripting and client-side GUI applications. PHP can be deployed on most web servers, many operating systems and platforms, and can be used with many relational database management systems (RDBMS). It is available free of charge, and the PHP Group provides the complete source code for users to build, customize and extend for their own use.[3]

PHP primarily acts as a filter,[35] taking input from a file or stream containing text and/or PHP instructions and outputting another stream of data; most commonly the output will be HTML. Since PHP 4, the PHP parser compiles input to produce bytecode for processing by the Zend Engine, giving improved performance over its interpreter predecessor.[36]

Originally designed to create dynamic web pages, PHP now focuses mainly on server-side scripting,[37] and it is similar to other server-side scripting languages that provide dynamic content from a web server to a client, such as Microsoft's Asp.net, Sun Microsystems' JavaServer Pages,[38] and mod_perl. PHP has also attracted the development of many frameworks that provide building blocks and a design structure to promote rapid application development (RAD). Some of these include CakePHP, Symfony, CodeIgniter, and Zend Framework, offering features similar to other web application frameworks.

The LAMP architecture has become popular in the web industry as a way of deploying web applications. PHP is commonly used as the P in this bundle alongside Linux, Apache and MySQL, although the P may also refer to Python or Perl or some combination of the three. WAMP packages (Windows/ Apache/ MySQL / PHP) and MAMP packages (Mac OS X / Apache / MySQL / PHP) are also available.

As of April 2007, over 20 million Internet domains had web services hosted on servers with PHP installed, and mod_php was recorded as the most popular Apache HTTP Server module.[39] PHP is used as the server-side programming language on 75% of all web servers.[40] Web content management systems written in PHP include MediaWiki,[41] Joomla, eZ Publish, WordPress,[42] Drupal[43] and Moodle.[44] Websites built with these tools are written in PHP, as are the user-facing portions of Wikipedia, Facebook,[45] and Digg.[46]

Security

The National Vulnerability Database maintains a list of vulnerabilities found in computer software. The overall proportion of PHP-related vulnerabilities on the database amounted to: 20% in 2004, 28% in 2005, 43% in 2006, 36% in 2007, 35% in 2008, and 30% in 2009.[47] Most of these PHP-related vulnerabilities can be exploited remotely: they allow attackers to steal or destroy data from data sources linked to the webserver (such as an SQL database), send spam or contribute to DoS attacks using malware, which itself can be installed on the vulnerable servers.

These vulnerabilities are caused mostly by not following best-practice programming rules: technical security flaws of the language itself or of its core libraries are not frequent (23 in 2008, about 1% of the total).[48][49] Recognizing that programmers cannot be trusted, some languages include taint checking to automatically detect the lack of input validation which induces many issues. Such a feature is being developed for PHP,[50] but its inclusion in a release has been rejected several times in the past.[51][52]
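Absent built-in taint checking, one common defence is explicit input validation. A minimal sketch using the built-in filter extension (enabled by default since PHP 5.2):

```php
<?php
// Validate external input before using it; filter_var() returns the
// filtered value on success and false on failure.
$email = "user@example.com";

if (filter_var($email, FILTER_VALIDATE_EMAIL) !== false) {
    echo "valid";
} else {
    echo "rejected";
}
```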

Hosting PHP applications on a server requires careful and constant attention to deal with these security risks.[53] There are advanced protection patches such as Suhosin and Hardening-Patch, especially designed for web hosting environments.[54]

PHPIDS adds security to any PHP application to defend against intrusions. It detects cross-site scripting (XSS), SQL injection, header injection, directory traversal, remote file execution, local file inclusion, and denial-of-service (DoS) attacks.[55]

Syntax

Main article: PHP syntax and semantics

<!DOCTYPE html>
<html>
 <head>
  <meta charset="utf-8" />
  <title>PHP Test</title>
 </head>
 <body>
  <?php
   echo 'Hello World';
   /* echo("Hello World"); works as well, although echo is not a
      function, but a language construct. In some cases, such as when
      multiple parameters are passed to echo, parameters cannot be
      enclosed in parentheses. */
  ?>
 </body>
</html>

Hello world program in PHP code embedded within HTML code

The PHP interpreter only executes PHP code within its delimiters. Anything outside them is not processed by PHP (although non-PHP text is still subject to control structures described within PHP code). The most common delimiters are <?php to open and ?> to close a PHP section. <script language="php"> and </script> delimiters are also available, as are the shortened forms <? or <?= (which echoes back a string or variable) and ?>, as well as the ASP-style short forms <% or <%= and %>. Short delimiters make script files less portable, since support for them can be disabled in the PHP configuration, so their use is discouraged.[56] The purpose of all these delimiters is to separate PHP code from non-PHP code, including HTML.[57]

The first form of delimiters, <?php and ?>, in XHTML and other XML documents, creates correctly formed XML 'processing instructions'.[58] This means that the resulting mixture of PHP code and other markup in the server-side file is itself well-formed XML.


Variables are prefixed with a dollar symbol, and a type does not need to be specified in advance. Unlike function and class names, variable names are case-sensitive. Both double-quoted ("") and heredoc strings provide the ability to embed a variable's value into the string.[59] PHP treats newlines as whitespace, in the manner of a free-form language (except when inside string quotes), and statements are terminated by a semicolon.[60] PHP has three types of comment syntax: /* */ marks block and inline comments; // and # are used for one-line comments.[61] The echo statement is one of several facilities PHP provides to output text (e.g. to a web browser).
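The points above (dollar-prefixed variables, interpolation in double-quoted and heredoc strings, and the three comment styles) can be seen in a short sketch:

```php
<?php
$name = 'World';       // variables start with $; no type is declared

$a = "Hello $name";    // double quotes interpolate: Hello World
$b = 'Hello $name';    // single quotes do not: Hello $name
$c = <<<EOT
Hello $name
EOT;

/* block comment */
// one-line comment
# also a one-line comment
echo $a;               // statements end with a semicolon
```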

In terms of keywords and language syntax, PHP is similar to most high level languages that follow the C style syntax. if conditions, for and while loops, and function returns are similar in syntax to languages such as C, C++, Java and Perl.

Data types

PHP stores whole numbers in a platform-dependent range, either a 64-bit or 32-bit signed integer equivalent to the C-language long type. Unsigned integers are converted to signed values in certain situations; this behavior is different from other programming languages.[62] Integer variables can be assigned using decimal (positive and negative), octal, and hexadecimal notations. Floating point numbers are also stored in a platform-specific range. They can be specified using floating point notation, or two forms of scientific notation.[63]

PHP has a native Boolean type that is similar to the native Boolean types in Java and C++. Using the Boolean type conversion rules, non-zero values are interpreted as true and zero as false, as in Perl and C++.[63] The null data type represents a variable that has no value; NULL is the only value of this type.[63]

Variables of the "resource" type represent references to resources from external sources. These are typically created by functions from a particular extension, and can only be processed by functions from the same extension; examples include file, image, and database resources.[63]

Arrays can contain elements of any type that PHP can handle, including resources, objects, and even other arrays. Order is preserved in lists of values and in hashes with both keys and values, and the two can be intermingled.[63] PHP also supports strings, which can be used with single quotes, double quotes, nowdoc or heredoc syntax.[64]
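A short sketch of the types described above (the values are chosen purely for illustration):

```php
<?php
$i = 0x1A;          // hexadecimal integer notation: 26
$o = 017;           // octal notation: 15
$f = 1.5e3;         // scientific notation: 1500.0
$t = (bool) -3;     // any non-zero value converts to true
$n = null;          // the only value of the null type

// Arrays preserve order and may mix integer keys, string keys,
// and nested arrays:
$mixed = array(1, 'two', 'pi' => 3.14, array('nested'));
echo $mixed['pi'];  // prints "3.14"
```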

The Standard PHP Library (SPL) attempts to solve standard problems and implements efficient data access interfaces and classes.[65]

Functions

PHP has hundreds of base functions and thousands more via extensions. These functions are well documented on the PHP site; however, the built-in library has a wide variety of naming conventions and inconsistencies.[66] PHP currently has no functions for thread programming, although it does support multiprocess programming on POSIX systems.[67]

Additional functions can be defined by a developer:

function myFunction() {
    return 'John Doe';
}

echo 'My name is ' . myFunction() . '!';

PHP 5.2 and earlier

Functions are not first-class and can only be referenced by their name, directly or dynamically through a variable containing the name of the function.[68] User-defined functions can be created at any time without being prototyped.[68] Functions can be defined inside code blocks, permitting a run-time decision as to whether or not a function should be defined. Function calls must use parentheses, with the exception of zero-argument class constructor functions called with the PHP new operator, where parentheses are optional. PHP supports quasi-anonymous functions through the create_function() function; these are not true anonymous functions, because anonymous functions are nameless, whereas in PHP functions can only be referenced by name or indirectly through a variable ($function_name();).[68]
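The name-based indirection described above can be sketched as follows (the function name is illustrative); the create_function() form is shown only in a comment, since it was deprecated in later PHP versions:

```php
<?php
// Functions are referenced by name, either directly or through a
// variable that holds the name:
function double_it($x) {
    return 2 * $x;
}

$fn = 'double_it';   // "variable function" call
echo $fn(7);         // prints "14"

// In PHP 5.2 and earlier, a quasi-anonymous function could be built at
// runtime with create_function('$a, $b', 'return $a + $b;'), which
// returns the generated function's name as a string.
```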

PHP 5.3 and newer

PHP gained support for closures. True anonymous functions are supported using the following syntax:

function getAdder($x) {
    return function($y) use ($x) {
        return $x + $y;
    };
}

$adder = getAdder(8);
echo $adder(2); // prints "10"

Here, the getAdder() function creates a closure over the parameter $x (the keyword use imports a variable from the lexical context); the returned function takes an additional argument $y and returns the sum to the caller. Such a function is a first-class object: it can be stored in a variable, passed as a parameter to other functions, and so on. For more details see the Lambda functions and closures RFC.
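Because a closure is a first-class value, it can also be handed to higher-order functions; a small sketch using array_map():

```php
<?php
// A closure stored in a variable and passed to array_map():
$square = function ($n) {
    return $n * $n;
};

$result = array_map($square, array(1, 2, 3, 4));
print_r($result); // the array holds 1, 4, 9, 16
```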

The goto flow control statement is used as follows:

function lock() {
    $file = fopen('file.txt', 'r+');
retry:
    if (!flock($file, LOCK_EX)) {
        goto retry;
    }
    fwrite($file, 'Success!');
    fclose($file);
    return 0;
}

When flock() is called, PHP opens a file and tries to lock it. The target label retry: defines the point to which execution returns if flock() is unsuccessful and goto retry; is called. goto is restricted and requires that the target label be in the same file and context. goto has been supported since PHP 5.3.

Objects

Basic object-oriented programming functionality was added in PHP 3 and improved in PHP 4.[6] Object handling was completely rewritten for PHP 5, expanding the feature set and enhancing performance.[69] In previous versions of PHP, objects were handled like value types.[69] The drawback of this method was that the whole object was copied when a variable was assigned or passed as a parameter to a method. In the new approach, objects are referenced by handle, and not by value. PHP 5 introduced private and protected member variables and methods, along with abstract classes and final classes as well as abstract methods and final methods. It also introduced a standard way of declaring constructors and destructors, similar to that of other object-oriented languages such as C++, and a standard exception handling model. Furthermore, PHP 5 added interfaces and allowed for multiple interfaces to be implemented. There are special interfaces that allow objects to interact with the runtime system. Objects implementing ArrayAccess can be used with array syntax and objects implementing Iterator or IteratorAggregate can be used with the foreach language construct. There is no virtual table feature in the engine, so static variables are bound with a name instead of a reference at compile time.[70]

If the developer creates a copy of an object using the reserved word clone, the Zend engine will check if a __clone() method has been defined or not. If not, it will call a default __clone() which will copy the object's properties. If a __clone() method is defined, then it will be responsible for setting the necessary properties in the created object. For convenience, the engine will supply a function that imports the properties of the source object, so that the programmer can start with a by-value replica of the source object and only override properties that need to be changed.[71]
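The cloning behaviour just described can be sketched with a hypothetical class whose __clone() resets one property on the copy while the rest are carried over by the engine's default property copy:

```php
<?php
class Counter {
    public $label = 'default';
    public $count = 0;

    public function __clone() {
        // Runs on the copy after its properties have been duplicated;
        // the copy keeps the label but restarts its count.
        $this->count = 0;
    }
}

$a = new Counter();
$a->label = 'requests';
$a->count = 5;

$b = clone $a;    // properties copied, then __clone() is invoked
echo $b->label;   // prints "requests"
echo $b->count;   // prints "0"
```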

Basic example of object-oriented programming as described above:

class Person {
    public $firstName;
    public $lastName;

    public function __construct($firstName, $lastName = '') { // optional parameter
        $this->firstName = $firstName;
        $this->lastName  = $lastName;
    }

    public function greet() {
        return "Hello, my name is " . $this->firstName . " " . $this->lastName . ".";
    }

    static public function staticGreet($firstName, $lastName) {
        return "Hello, my name is " . $firstName . " " . $lastName . ".";
    }
}

$he    = new Person('John', 'Smith');
$she   = new Person('Sally', 'Davis');
$other = new Person('Joe');

echo $he->greet();    // prints "Hello, my name is John Smith."
echo '<br />';
echo $she->greet();   // prints "Hello, my name is Sally Davis."
echo '<br />';
echo $other->greet(); // prints "Hello, my name is Joe ."
echo '<br />';
echo Person::staticGreet('Jane', 'Doe'); // prints "Hello, my name is Jane Doe."

Visibility of properties and methods

The visibility of PHP properties and methods is defined using the keywords public, private, and protected. The default is public, if only var is used; var is a synonym for public. Items declared public can be accessed everywhere; protected limits access to inherited classes (and to the class that defines the item); private limits visibility to the class that defines the item.[72] Objects of the same type have access to each other's private and protected members even though they are not the same instance. PHP's member visibility features have sometimes been described as "highly useful",[73] but have also been described as "at best irrelevant and at worst positively harmful".[74]
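A minimal sketch of the three visibility levels (the class and property names are hypothetical), including the same-class access to another instance's private member noted above:

```php
<?php
class Account {
    public $owner;       // accessible everywhere
    protected $branch;   // this class and its subclasses only
    private $balance;    // this class only

    public function __construct($owner, $balance) {
        $this->owner   = $owner;
        $this->branch  = 'main';
        $this->balance = $balance;
    }

    // Two instances of the same class can read each other's
    // private members:
    public function richerThan(Account $other) {
        return $this->balance > $other->balance;
    }
}

$a = new Account('Ann', 100);
$b = new Account('Bob', 50);

echo $a->owner;                          // public: allowed anywhere
echo $a->richerThan($b) ? 'yes' : 'no';  // prints "yes"
// echo $a->balance;                     // would be a fatal error: private
```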

Speed optimization

Main article: PHP accelerator

PHP source code is compiled on-the-fly to an internal format that can be executed by the PHP engine.[75][76] In order to speed up execution time and not have to compile the PHP source code every time the webpage is accessed, PHP scripts can also be deployed in executable format using a PHP compiler.

Code optimizers aim to enhance the performance of the compiled code by reducing its size, merging redundant instructions and making other changes that can reduce the execution time. With PHP, there are often opportunities for code optimization.[77] An example of a code optimizer is the eAccelerator PHP extension.[78]

Another approach for reducing compilation overhead for PHP servers is using an opcode cache. Opcode caches work by caching the compiled form of a PHP script (opcodes) in shared memory to avoid the overhead of parsing and compiling the code every time the script runs. An opcode cache, APC, will be built into an upcoming release of PHP.[79]

Opcode caching and code optimization can be combined for best efficiency, as the modifications do not depend on each other (they happen in distinct stages of the compilation).

Compilers


The PHP language was originally implemented as an interpreter. Several compilers have since been developed which decouple the PHP language from the interpreter. Advantages of compilation include better execution speed, obfuscation, static analysis, and improved interoperability with code written in other languages.[80] PHP compilers of note include Phalanger, which compiles PHP into CIL byte-code, and HipHop, developed at Facebook and now available as open source, which transforms PHP scripts into C++ and then compiles them, reducing server load by up to 50%.

Resources

PHP includes free and open source libraries with the core build. PHP is a fundamentally Internet-aware system with modules built in for accessing FTP servers, many database servers, embedded SQL libraries such as embedded PostgreSQL, MySQL and SQLite, LDAP servers, and others. Many functions familiar to C programmers such as those in the stdio family are available in the standard PHP build.[81]
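The stdio-family resemblance mentioned above can be sketched in a few lines (the script creates and removes its own temporary file):

```php
<?php
// C-style stdio in the standard PHP build: fopen/fprintf/fgets/fclose.
$path = tempnam(sys_get_temp_dir(), 'php');

$fh = fopen($path, 'w');
fprintf($fh, "%s scored %d\n", 'Alice', 42);
fclose($fh);

$fh = fopen($path, 'r');
$line = fgets($fh);
fclose($fh);
unlink($path);

echo $line; // prints "Alice scored 42"
```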

PHP allows developers to write extensions in C to add functionality to the PHP language. These can then be compiled into PHP or loaded dynamically at runtime. Extensions have been written to add support for the Windows API, process management on Unix-like operating systems, multibyte strings (Unicode), cURL, and several popular compression formats. Some more unusual features include integration with Internet Relay Chat, dynamic generation of images and Adobe Flash content, and even speech synthesis. The PHP Extension Community Library (PECL) project is a repository for extensions to the PHP language.[82]

Zend provides a certification exam for programmers to become certified PHP developers.

Programming language
From Wikipedia, the free encyclopedia

A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely.

The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, with many more being created every year. Most programming languages describe computation in an imperative style, i.e., as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use alternative forms of description.

A programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard), while other languages, such as Perl, have a dominant implementation that is used as a reference.

Contents

[hide]

1 Definitions

2 Elements

o 2.1 Syntax

o 2.2 Semantics

2.2.1 Static semantics

2.2.2 Dynamic semantics

2.2.3 Type system

o 2.3 Standard library and run-time system

3 Design and implementation

o 3.1 Specification

o 3.2 Implementation

4 Usage

o 4.1 Measuring language usage

5 Taxonomies

6 History

o 6.1 Early developments

o 6.2 Refinement

o 6.3 Consolidation and growth

7 See also

8 References

9 Further reading

10 External links


[edit] Definitions

A programming language is a notation for writing programs, which are specifications of a computation or algorithm.[1] Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms.[1][2] Traits often considered important for what constitutes a programming language include:

Function and target: A computer programming language is a language[3] used to write computer programs, which involve a computer performing some kind of computation[4] or algorithm and possibly controlling external devices such as printers, disk drives, robots,[5] and so on. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language.[6] In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way.[7] Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.

Abstractions: Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle;[8] this principle is sometimes formulated as recommendation to the programmer to make proper use of such abstractions.[9]

Expressive power: The theory of computation classifies languages by the computations they are capable of expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL and Charity are examples of languages that are not Turing complete, yet often called programming languages.[10][11]

Markup languages like XML, HTML or troff, which define structured data, are not generally considered programming languages.[12][13][14] Programming languages may, however, share the syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete XML dialect.[15][16][17] Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset.[18][19]

The term computer language is sometimes used interchangeably with programming language.[20] However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages.[21] In this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.[22] Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[23] John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[24]

[edit] Elements

All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.

[edit] Syntax

Parse tree of Python code with inset tokenization

Syntax highlighting is often used to aid programmers in recognizing elements of source code. The language above is Python.

Main article: Syntax (programming languages)

A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.

The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax.

Programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur Form (for grammatical structure). Below is a simple grammar, based on Lisp:

expression ::= atom | list
atom       ::= number | symbol
number     ::= [+-]?['0'-'9']+
symbol     ::= ['A'-'Z''a'-'z'].*
list       ::= '(' expression* ')'

This grammar specifies the following:

an expression is either an atom or a list;

an atom is either a number or a symbol;

a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;

a symbol is a letter followed by zero or more of any characters (excluding whitespace); and

a list is a matched pair of parentheses, with zero or more expressions inside it.

The following are examples of well-formed token sequences in this grammar: '12345', '()', '(a b c232 (1))'
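A grammar like the one above can be checked mechanically. The following Python sketch (an illustrative recognizer written for this article's grammar, not taken from any Lisp implementation) tokenizes a string with regular expressions for the lexical rules and parses it recursively for the grammatical ones:

```python
import re

# Token patterns mirror the grammar's lexical rules. Note that, exactly as
# the prose states, a symbol is a letter followed by any non-whitespace
# characters, so symbols must be delimited by whitespace here.
TOKEN = re.compile(r"""
    (?P<number>[+-]?[0-9]+)      # one or more digits, optional sign
  | (?P<symbol>[A-Za-z]\S*)      # a letter, then zero or more non-whitespace chars
  | (?P<lparen>\()
  | (?P<rparen>\))
  | (?P<ws>\s+)
""", re.VERBOSE)

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            raise SyntaxError(f"bad character at position {pos}")
        if m.lastgroup != "ws":          # whitespace separates tokens only
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

def parse_expression(tokens, i=0):
    """Parse one expression starting at token index i; return (tree, next index)."""
    kind, value = tokens[i]
    if kind in ("number", "symbol"):     # an atom
        return value, i + 1
    if kind == "lparen":                 # a list: expressions until ')'
        items, i = [], i + 1
        while tokens[i][0] != "rparen":
            item, i = parse_expression(tokens, i)
            items.append(item)
        return items, i + 1
    raise SyntaxError(f"unexpected token {value!r}")

def is_well_formed(text):
    try:
        tokens = tokenize(text)
        _, i = parse_expression(tokens)
        return i == len(tokens)          # one expression consuming all input
    except (SyntaxError, IndexError):
        return False
```

With this recognizer, `is_well_formed` accepts the three example token sequences from the text ('12345', '()', '(a b c232 (1))') and rejects unbalanced input such as '(('.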

Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.

Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:

"Colorless green ideas sleep furiously." is grammatically well-formed but has no generally accepted meaning.

"John is a married bachelor." is grammatically well-formed but expresses a meaning that cannot be true.

The following C language fragment is syntactically correct, but performs operations that are not semantically defined (the operation *p >> 4 has no meaning for a value having a complex type and p->im is not defined because the value of p is the null pointer):

complex *p = NULL;
complex abs_p = sqrt(*p >> 4 + p->im);

If the type declaration on the first line were omitted, the program would trigger an error on compilation, as the variable "p" would not be defined. But the program would still be syntactically correct, since type declarations provide only semantic information.
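The same distinction shows up in Python, where referring to an undeclared name is a semantic error that surfaces only at run time; the compiler accepts the text because it is syntactically valid. A small illustrative snippet (the variable name is invented for the example):

```python
# Syntactically valid Python: it compiles to bytecode without complaint...
code = compile("result = undeclared_variable + 1", "<example>", "exec")

# ...but its meaning is undefined for this program, so Python reports
# the problem only when the code is actually executed.
try:
    exec(code)
except NameError as e:
    print("runtime error:", e)   # name 'undeclared_variable' is not defined
```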

The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[25] Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution.[26] In contrast to Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string replacements, and do not require code execution.[27]

[edit] Semantics

Further information: Semantics#Computer_science

The term semantics refers to the meaning of languages, as opposed to their form (syntax).

[edit] Static semantics

The static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[1] For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct.[28] Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analyses like data flow analysis may also be part of static semantics. Newer programming languages like Java and C# have definite assignment analysis, a form of data flow analysis, as part of their static semantics.

[edit] Dynamic semantics

Main article: Semantics of programming languages

Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The dynamic semantics (also known as execution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research went into formal semantics of programming languages, which allow execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.

[edit] Type system

Main articles: Type system and Type safety

A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. The goal of a type system is to verify and usually enforce a certain level of correctness in programs written in that language by detecting certain incorrect operations. Any decidable type system involves a trade-off: while it rejects many incorrect programs, it can also prohibit some correct, albeit unusual programs. In order to bypass this downside, a number of languages have type loopholes, usually unchecked casts that may be used by the programmer to explicitly allow a normally disallowed operation between different types. In most typed languages, the type system is used only to type check programs, but a number of languages, usually functional ones, infer types, relieving the programmer from the need to write type annotations. The formal design and study of type systems is known as type theory.

[edit] Typed versus untyped languages

A language is typed if the specification of every operation defines types of data to which the operation is applicable, with the implication that it is not applicable to other types.[29] For example, the data represented by "this text between the quotes" is a string. In most programming languages, dividing a number by a string has no meaning. Most modern programming languages will therefore reject any program attempting to perform such an operation. In some languages, the meaningless operation will be detected when the program is compiled ("static" type checking), and rejected by the compiler, while in others, it will be detected when the program is run ("dynamic" type checking), resulting in a runtime exception.
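Python, for instance, performs this check dynamically: the meaningless operation below passes compilation and is rejected only when the offending line actually executes. A minimal sketch (function name chosen for illustration):

```python
def divide(a, b):
    return a / b

print(divide(10, 2))        # 5.0 — the operand types are fine at run time

try:
    divide(10, "two")       # meaningless: a number divided by a string
except TypeError as e:
    # "dynamic" type checking: the error is raised during execution
    print("rejected at run time:", e)
```

A statically checked language such as C or Java would instead reject the second call when the program is compiled.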

A special case of typed languages are the single-type languages. These are often scripting or markup languages, such as REXX or SGML, and have only one data type—most commonly character strings which are used for both symbolic and numeric data.

In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, which are generally considered to be sequences of bits of various lengths.[29] High-level languages which are untyped include BCPL and some varieties of Forth.

In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting all operations), most modern languages offer a degree of typing.[29] Many production languages provide means to bypass or subvert the type system.

[edit] Static versus dynamic typing

In static typing, all expressions have their types determined prior to when the program is executed, typically at compile-time. For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a string, or stored in a variable that is defined to hold dates.[29]

Statically typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically typed languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been associated with less mainstream languages, such as Haskell and ML. However, many manifestly typed languages support partial type inference; for example, Java and C# both infer types in certain limited cases.[30]
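Python's optional annotations give a flavor of the manifest/inferred split: the annotations below act as manifest types for an external static checker (such as mypy, run separately), while the Python runtime itself ignores them. A hedged sketch, with invented names:

```python
def greet(name: str) -> str:
    # The annotations are manifest type information for a separate static
    # checker; the Python interpreter does not enforce them at run time.
    return "Hello, " + name

print(greet("world"))        # Hello, world

# A static checker such as mypy would flag a call like greet(42) as a
# type error before the program runs; plain Python would only fail at
# run time, inside the string concatenation.
```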

Dynamic typing, also called latent typing, determines the type-safety of operations at runtime; in other words, types are associated with runtime values rather than textual expressions.[29] As with type-inferred languages, dynamically typed languages do not require the programmer to write explicit type annotations on expressions. Among other things, this may permit a single variable to refer to values of different types at different points in the program execution. However, type errors cannot be automatically detected until a piece of code is actually executed, potentially making debugging more difficult. Ruby, Lisp, JavaScript, and Python are dynamically typed.
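In a dynamically typed language the type travels with the value rather than with the variable, so one name can be rebound to values of different types. A brief Python illustration:

```python
# Under dynamic typing, the type belongs to the runtime value:
x = 42
print(type(x).__name__)      # int

x = "forty-two"              # the same name may later refer to a string
print(type(x).__name__)      # str

x = [4, 2]                   # ...or to a list
print(type(x).__name__)      # list
```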

[edit] Weak and strong typing

Weak typing allows a value of one type to be treated as another, for example treating a string as a number.[29] This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at run-time.

Strong typing prevents the above. An attempt to perform an operation on the wrong type of value raises an error.[29] Strongly typed languages are often termed type-safe or safe.

An alternative definition for "weakly typed" refers to languages, such as Perl and JavaScript, which permit a large number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors. Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed.[31][32]
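Python sits toward the strong end of this spectrum: it refuses to convert between unrelated types implicitly, while still converting within the numeric tower. A short illustrative contrast:

```python
# Python is comparatively strongly typed: mixing unrelated types raises
# an error instead of silently converting, unlike JavaScript's 2 * x.
try:
    "2" + 3
except TypeError as e:
    print("no implicit conversion:", e)

# Numeric types, however, do convert implicitly among themselves:
print(2 + 3.5)               # 5.5 — the int is widened to a float
```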

[edit] Standard library and run-time system

Main article: Standard library

Most programming languages have an associated core library (sometimes known as the 'standard library', especially if it is included as part of the published language standard), which is conventionally made available by all implementations of the language. Core libraries typically include definitions for commonly used algorithms, data structures, and mechanisms for input and output.

A language's core library is often treated as part of the language by its users, although the designers may have treated it as a separate entity. Many language specifications define a core that must be made available in all implementations, and in the case of standardized languages this core library may be required. The line between a language and its core library therefore differs from language to language. Indeed, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in Smalltalk, an anonymous function expression (a "block") constructs an instance of the library's BlockContext class. Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as library macros, and so the language designers do not even bother to say which portions of the language must be implemented as language constructs, and which must be implemented as parts of a library.

[edit] Design and implementation

Programming languages share properties with natural languages related to their purpose as vehicles for communication, having a syntactic form separate from its semantics, and showing language families of related languages branching one from another.[3] But as artificial constructs, they also differ in fundamental ways from languages that have evolved through usage. A significant difference is that a programming language can be fully described and studied in its entirety, since it has a precise and finite definition.[33] By contrast, natural languages have changing meanings given by their users in different communities. While constructed languages are also artificial languages designed from the ground up with a specific purpose, they lack the precise and complete semantic definition that a programming language has.

Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse. Although there have been attempts to design one "universal" programming language that serves all purposes, all of them have failed to be generally accepted as filling this role.[34] The need for diverse programming languages arises from the diversity of contexts in which languages are used:

Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of programmers.

Programmers range in expertise from novices who need simplicity above all else, to experts who may be comfortable with considerable complexity.

Programs must balance speed, size, and simplicity on systems ranging from microcontrollers to supercomputers.

Programs may be written once and not change for generations, or they may undergo continual modification.

Finally, programmers may simply differ in their tastes: they may be accustomed to discussing problems and expressing them in a particular language.

One common trend in the development of programming languages has been to add more ability to solve problems using a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer. This lets them write more functionality in the same amount of time.[35]

Natural language processors have been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural language programming as "foolish".[36] Alan Perlis was similarly dismissive of the idea.[37] Hybrid approaches have been taken in Structured English and SQL.

A language's designers and users must construct a number of artifacts that govern and enable the practice of programming. The most important of these artifacts are the language specification and implementation.

[edit] Specification

Main article: Programming language specification

The specification of a programming language is intended to provide a definition that the language users and the implementors can use to determine whether the behavior of a program is correct, given its source code.

A programming language specification can take several forms, including the following:

An explicit definition of the syntax, static semantics, and execution semantics of the language. While syntax is commonly specified using a formal grammar, semantic definitions may be written in natural language (e.g., as in the C language), or a formal semantics (e.g., as in the Standard ML[38] and Scheme[39] specifications).

A description of the behavior of a translator for the language (e.g., the C++ and Fortran specifications). The syntax and semantics of the language have to be inferred from this description, which may be written in natural or a formal language.

A reference or model implementation, sometimes written in the language being specified (e.g., Prolog or ANSI REXX[40]). The syntax and semantics of the language are explicit in the behavior of the reference implementation.

[edit] Implementation

Main article: Programming language implementation

An implementation of a programming language provides a way to execute that program on one or more configurations of hardware and software. There are, broadly, two approaches to programming language implementation: compilation and interpretation. It is generally possible to implement a language using either technique.

The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations that make use of the interpreter approach there is no distinct boundary between compiling and interpreting. For instance, some implementations of BASIC compile and then execute the source a line at a time.

Programs that are executed directly on the hardware usually run several orders of magnitude faster than those that are interpreted in software.[citation needed]

One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual machine, just before execution, translates the blocks of bytecode which are going to be used to machine code, for direct execution on the hardware.

[edit] Usage

Thousands of different programming languages have been created, mainly in the computing field.[41] Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness.

When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language.

A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives).[42] Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment.

Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language is used to give commands to a software application (such as a shell) it is called a scripting language.[citation needed]

[edit] Measuring language usage

Main article: Measuring programming language popularity

It is difficult to determine which programming languages are most widely used, and what usage means varies by context. One language may occupy the greater number of programmer hours, a different one have more lines of code, and a third utilize the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes; FORTRAN in scientific and engineering applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications.

Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:

counting the number of job advertisements that mention the language[43]

the number of books sold that teach or describe the language[44]

estimates of the number of existing lines of code written in the language—which may underestimate languages not often found in public searches[45]

counts of language references (i.e., to the name of the language) found using a web search engine.

Combining and averaging information from various internet sites, langpop.com claims that in 2008 the 10 most cited programming languages were (in alphabetical order): C, C++, C#, Java, JavaScript, Perl, PHP, Python, Ruby, and SQL.[46]

[edit] Taxonomies

For more details on this topic, see Categorical list of programming languages.

There is no overarching classification scheme for programming languages. A given programming language does not usually have a single ancestor language. Languages commonly arise by combining the elements of several predecessor languages with new ideas in circulation at the time. Ideas that originate in one language will diffuse throughout a family of related languages, and then leap suddenly across familial gaps to appear in an entirely different family.

The task is further complicated by the fact that languages can be classified along multiple axes. For example, Java is both an object-oriented language (because it encourages object-oriented organization) and a concurrent language (because it contains built-in constructs for running multiple threads in parallel). Python is an object-oriented scripting language.

In broad strokes, programming languages divide into programming paradigms and a classification by intended domain of use. Traditionally, programming languages have been regarded as describing computation in terms of imperative sentences, i.e. issuing commands. These are generally called imperative programming languages. A great deal of research in programming languages has been aimed at blurring the distinction between a program as a set of instructions and a program as an assertion about the desired answer, which is the main feature of declarative programming.[47] More refined paradigms include procedural programming, object-oriented programming, functional programming, and logic programming; some languages are hybrids of paradigms or multi-paradigmatic. An assembly language is not so much a paradigm as a direct model of an underlying machine architecture. By purpose, programming languages might be considered general purpose, system programming languages, scripting languages, domain-specific languages, or concurrent/distributed languages (or a combination of these).[48] Some general purpose languages were designed largely with educational goals.[49]

A programming language may also be classified by factors unrelated to programming paradigm. For instance, most programming languages use English language keywords, while a minority do not. Other languages may be classified as being esoteric or not.

[edit] History

A selection of textbooks that teach programming, in languages both popular and obscure. These are only a few of the thousands of programming languages and dialects that have been designed in history.

Main articles: History of programming languages and Programming language generations

[edit] Early developments

The first programming languages predate the modern computer. The 19th century had "programmable" looms and player piano scrolls which implemented what are today recognized as examples of domain-specific languages. By the beginning of the twentieth century, punch cards encoded data and directed mechanical processing. In the 1930s and 1940s, the formalisms of Alonzo Church's lambda calculus and Alan Turing's Turing machines provided mathematical abstractions for expressing algorithms; the lambda calculus remains influential in language design.[50]

In the 1940s, the first electrically powered digital computers were created. The first high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945. However, it was not implemented until 1998 and 2000.[51]

Programmers of early 1950s computers, notably UNIVAC I and IBM 701, used machine language programs, that is, the first generation language (1GL). 1GL programming was quickly superseded by similarly machine-specific, but mnemonic, second generation languages (2GL) known as assembly languages or "assembler". Later in the 1950s, assembly language programming, which had evolved to include the use of macro instructions, was followed by the development of "third generation" programming languages (3GL), such as FORTRAN, LISP, and COBOL.[52] 3GLs are more abstract and are "portable", or at least implemented similarly on computers that do not support the same native machine code. Updated versions of all of these 3GLs are still in general use, and each has strongly influenced the development of later languages.[53] At the end of the 1950s, the language formalized as ALGOL 60 was introduced, and most later programming languages are, in many respects, descendants of Algol.[53] The format and use of the early programming languages was heavily influenced by the constraints of the interface.[54]

[edit] Refinement

The period from the 1960s to the late 1970s brought the development of the major language paradigms now in use, though many aspects were refinements of ideas in the very first Third-generation programming languages:

APL introduced array programming and influenced functional programming.[55]

PL/I (NPL) was designed in the early 1960s to incorporate the best ideas from FORTRAN and COBOL.

In the 1960s, Simula was the first language designed to support object-oriented programming; in the mid-1970s, Smalltalk followed with the first "purely" object-oriented language.

C was developed between 1969 and 1973 as a system programming language, and remains popular.[56]

Prolog, designed in 1972, was the first logic programming language.

In 1978, ML built a polymorphic type system on top of Lisp, pioneering statically typed functional programming languages.

Each of these languages spawned an entire family of descendants, and most modern languages count at least one of them in their ancestry.

The 1960s and 1970s also saw considerable debate over the merits of structured programming, and whether programming languages should be designed to support it.[57] Edsger Dijkstra, in a famous 1968 letter published in the Communications of the ACM, argued that GOTO statements should be eliminated from all "higher level" programming languages.[58]

The 1960s and 1970s also saw expansion of techniques that reduced the footprint of a program as well as improved productivity of the programmer and user. The card deck for an early 4GL was a lot smaller for the same functionality expressed in a 3GL deck.

[edit] Consolidation and growth

The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The United States government standardized Ada, a systems programming language derived from Pascal and intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-called "fifth generation" languages that incorporated logic programming constructs.[59] The functional languages community moved to standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the previous decade.

One important trend in language design for programming large-scale systems during the 1980s was an increased focus on the use of modules, or large-scale organizational units of code. Modula-2, Ada, and ML all developed notable module systems in the 1980s, although other languages, such as PL/I, already had extensive support for modular programming. Module systems were often wedded to generic programming constructs.[60]

The rapid growth of the Internet in the mid-1990s created opportunities for new languages. Perl, originally a Unix scripting tool first released in 1987, became common in dynamic websites. Java came to be used for server-side programming, and bytecode virtual machines became popular again in commercial settings with their promise of "Write once, run anywhere" (UCSD Pascal had been popular for a time in the early 1980s). These developments were not fundamentally novel, rather they were refinements to existing languages and paradigms, and largely based on the C family of programming languages.

Programming language evolution continues, in both industry and research. Current directions include security and reliability verification, new kinds of modularity (mixins, delegates, aspects), and database integration such as Microsoft's LINQ.

The 4GLs are examples of languages which are domain-specific, such as SQL, which manipulates and returns sets of data rather than the scalar values which are canonical to most programming languages. Perl, for example, with its 'here document' can hold multiple 4GL programs, as well as multiple JavaScript programs, in part of its own perl code and use variable interpolation in the 'here document' to support multi-language programming.[61]

MySQL
From Wikipedia, the free encyclopedia

MySQL

Developer(s) MySQL AB (A subsidiary of Oracle)

Initial release May 23, 1995

Stable release 5.5.15 (July 28, 2011; 12 days ago)

Preview release 5.6.2 (April 11, 2011; 3 months ago)

Written in C, C++

Operating system Cross-platform

Available in English

Type RDBMS

License GNU General Public License (version 2, with linking exception) or proprietary EULA

Website www.mysql.com, dev.mysql.com

MySQL (/maɪˌɛskjuːˈɛl/ "My S-Q-L",[1] also commonly /maɪˈsiːkwəl/ "My Sequel") is a relational database management system (RDBMS)[2] that runs as a server providing multi-user access to a number of databases. It is named after developer Michael Widenius' daughter, My. The SQL phrase stands for Structured Query Language.[3]

The MySQL development project has made its source code available under the terms of the GNU General Public License, as well as under a variety of proprietary agreements. MySQL was owned and sponsored by a single for-profit firm, the Swedish company MySQL AB, now owned by Oracle Corporation.[4]

Free-software and open-source projects that require a full-featured database management system often use MySQL. For commercial use, several paid editions are available that offer additional functionality. Applications that use MySQL databases include Joomla, WordPress, MyBB, phpBB, Drupal and other software built on the LAMP software stack. MySQL is also used in


many high-profile, large-scale World Wide Web products, including Wikipedia, Google [5] (though not for searches) and Facebook.[6]

Contents


1 Uses

2 Platforms and interfaces

3 Management and graphical frontends

o 3.1 Official

o 3.2 Third-party

o 3.3 Command line

4 Deployment

5 Features

o 5.1 Distinguishing features

6 Product history

o 6.1 Future releases

7 Support and licensing

8 Corporate backing history

9 Forks

10 MySQL versions

11 See also

12 References

13 External links

Uses

MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP web application software stack—LAMP is an acronym for "Linux, Apache, MySQL, Perl/PHP/Python".


MySQL is used in some of the most frequently visited web sites on the Internet, including Flickr,[7] Nokia.com,[8] YouTube [9] and as previously mentioned, Wikipedia,[10] Google [11] and Facebook.[12][13]

Platforms and interfaces

MySQL is written in C and C++. Its SQL parser is written in yacc, but it uses a home-brewed lexical analyzer named sql_lex.cc.[14]

MySQL works on many different system platforms, including AIX, BSDi, FreeBSD, HP-UX, eComStation, i5/OS, IRIX, Linux, Mac OS X, Microsoft Windows, NetBSD, Novell NetWare, OpenBSD, OpenSolaris, OS/2 Warp, QNX, Solaris, Symbian, SunOS, SCO OpenServer, SCO UnixWare, Sanos and Tru64. A port of MySQL to OpenVMS also exists.[15]

Many programming languages with language-specific APIs include libraries for accessing MySQL databases. These include MySQL Connector/Net for integration with Microsoft's Visual Studio (languages such as C# and VB are most commonly used) and the JDBC driver for Java. In addition, an ODBC interface called MyODBC allows additional programming languages that support the ODBC interface, such as ASP or ColdFusion, to communicate with a MySQL database. HTSQL, a URL-based query method, also ships with a MySQL adapter, allowing direct interaction between a MySQL database and any web client via structured URLs. The MySQL server and official libraries are mostly implemented in ANSI C/ANSI C++.
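
As a concrete sketch, accessing MySQL from a language such as Python goes through one of these language-specific APIs. The snippet below only assembles the connection parameters such a client expects; the commented-out connect() call assumes the third-party mysql-connector-python package and a running server, and the user and database names are hypothetical.

```python
# Sketch: the keyword arguments a typical MySQL client API expects.
# Host, user and database names here are made up for illustration.

def connection_params(host="localhost", port=3306, user="root",
                      password="", database="test"):
    """Collect connection settings for a MySQL client library."""
    return {"host": host, "port": port, "user": user,
            "password": password, "database": database}

params = connection_params(user="webapp", database="shop")
print(params["port"])  # 3306, the default MySQL port

# Against a live server one would continue roughly like this
# (assumes the mysql-connector-python package is installed):
#   import mysql.connector
#   conn = mysql.connector.connect(**params)
#   cur = conn.cursor()
#   cur.execute("SELECT VERSION()")
#   print(cur.fetchone())
```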

Management and graphical frontends

MySQL Workbench in Windows, displaying the Home Screen which streamlines use of its full capabilities

MySQL is primarily an RDBMS and therefore ships with no GUI tools to administer MySQL databases or manage data contained within. Users may use the included command-line tools,[16] or download MySQL frontends from various parties that have developed desktop software and web applications to manage MySQL databases, build database structure, and work with data records.


Official

The official MySQL Workbench is a free integrated environment developed by MySQL AB that enables users to graphically administer MySQL databases and visually design database structures. MySQL Workbench replaces the previous package of software, MySQL GUI Tools. Similar to other third-party packages, but still considered the authoritative MySQL frontend, MySQL Workbench lets users manage the following:

Database design & modeling

SQL development – replacing MySQL Query Browser

Database administration – replacing MySQL Administrator

MySQL Workbench is available in two editions, the regular free and open source Community Edition which may be downloaded from the MySQL website, and the proprietary Standard Edition which extends and improves the feature set of the Community Edition.

Third-party

Third-party proprietary and free graphical administration applications (or "front ends") are available that integrate with MySQL and enable users to work with database structure and data visually. Some well-known front ends, in alphabetical order, are:

Adminer – a free MySQL front end written in one PHP script, capable of managing multiple databases, with many CSS skins available.

DBEdit – a free front end for MySQL and other databases.

dbForge GUI Tools – a set of tools for database management that includes separate applications for schema comparison and synchronization, data comparison and synchronization, and building queries.

HeidiSQL – a full-featured free front end that runs on Windows, and can connect to local or remote MySQL servers to manage databases, tables, column structure, and individual data records. It also supports specialised GUI features for date/time fields and enumerated multiple-value fields.[17]

Navicat – a series of proprietary graphical database management applications, developed for Windows, Macintosh and Linux.

OpenOffice.org – OpenOffice.org Base can manage MySQL databases. (The full OpenOffice.org suite must be installed; it is free and open source.)

phpMyAdmin – a free Web-based front end widely installed by Web hosts worldwide, since it is developed in PHP and is included in the convenient LAMP stack, MAMP, and WAMP software bundle installers.


Other available proprietary MySQL front ends include dbForge Studio for MySQL, Epictetus, Oracle SQL Developer, SchemaBank, SQLyog, SQLPro SQL Client, TOAD and Toad Data Modeler.

Command line

MySQL ships with a suite of command-line tools for tasks such as querying the database, backing up data, inspecting status, performing common tasks such as creating a database, and many more. A variety of third-party command-line tools is also available, including Maatkit, which is written in Perl.
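
A rough sketch of driving one of these command-line tools from a script: the function below assembles a mysqldump backup invocation. The flags shown are real mysqldump options, but the database name, user, and output path are placeholders, and the actual subprocess call is only indicated in a comment because it requires the client binaries to be installed.

```python
# Assemble a mysqldump command line for a consistent backup.
# Database name, user and output path are placeholders.

def mysqldump_argv(database, user, outfile):
    return ["mysqldump",
            "--user", user,
            "--single-transaction",   # consistent snapshot for InnoDB tables
            "--result-file", outfile,
            database]

argv = mysqldump_argv("shop", "backup", "/tmp/shop.sql")
print(" ".join(argv))

# With the client tools installed, one would then run:
#   import subprocess
#   subprocess.run(argv, check=True)
```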

Deployment

MySQL can be built and installed manually from source code, but this can be tedious so it is more commonly installed from a binary package unless special customizations are required. On most Linux distributions the package management system can download and install MySQL with minimal effort, though further configuration is often required to adjust security and optimization settings.

Though MySQL began as a low-end alternative to more powerful proprietary databases, it has gradually evolved to support higher-scale needs as well. It is still most commonly used in small to medium scale single-server deployments, either as a component in a LAMP based web application or as a standalone database server. Much of MySQL's appeal originates in its relative simplicity and ease of use, which is enabled by an ecosystem of open source tools such as phpMyAdmin. In the medium range, MySQL can be scaled by deploying it on more powerful hardware, such as a multi-processor server with gigabytes of memory.

There are however limits to how far performance can scale on a single server, so on larger scales, multi-server MySQL deployments are required to provide improved performance and reliability. A typical high-end configuration can include a powerful master database which handles data write operations and is replicated to multiple slaves that handle all read operations.[18] The master server synchronizes continually with its slaves so in the event of failure a slave can be promoted to become the new master, minimizing downtime. Further improvements in performance can be achieved by caching the results from database queries in memory using memcached, or breaking down a database into smaller chunks called shards which can be spread across a number of distributed server clusters.[19]
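
The two scale-out ideas just described, routing writes to the master while replicated slaves serve reads, and splitting data into shards, can be sketched in a few lines of Python. Everything below is a toy simulation with hypothetical host names, not a client library.

```python
# Toy simulation of read/write splitting and shard routing.
import hashlib
import itertools

MASTER = "db-master"                                    # hypothetical hosts
_slaves = itertools.cycle(["db-slave-1", "db-slave-2"])
SHARDS = ["shard-a", "shard-b", "shard-c"]

def route(statement):
    """Send writes to the master; round-robin reads over the slaves."""
    verb = statement.lstrip().split()[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
        return MASTER
    return next(_slaves)

def shard_for(key):
    """Deterministically map a key to one shard by hashing it."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(route("INSERT INTO posts VALUES (1)"))  # db-master
print(route("SELECT * FROM posts"))           # one of the slaves
print(shard_for(42))                          # always the same shard for key 42
```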

Features

As of April 2009, MySQL offered MySQL 5.1 in two different variants: the open source MySQL Community Server and the commercial Enterprise Server. MySQL 5.5 is offered under the same licences.[20] They have a common code base and include the following features:

A broad subset of ANSI SQL 99, as well as extensions

Cross-platform support


Stored procedures

Triggers

Cursors

Updatable Views

True Varchar support

Information schema

Strict mode[further explanation needed]

X/Open XA distributed transaction processing (DTP) support; two-phase commit as part of this, using Oracle's InnoDB engine

Independent storage engines (MyISAM for read speed, InnoDB for transactions and referential integrity, MySQL Archive for storing historical data in little space)

Transactions with the InnoDB, BDB and Cluster storage engines; savepoints with InnoDB

SSL support

Query caching

Sub-SELECTs (i.e. nested SELECTs)

Replication support (i.e. master–master replication and master–slave replication), with one master per slave, many slaves per master, and no automatic support for multiple masters per slave.

Full-text indexing and searching using MyISAM engine

Embedded database library

Partial Unicode support (UTF-8 and UCS-2 encoded strings are limited to the BMP)

Partial ACID compliance (full compliance only when using the non-default storage engines InnoDB, BDB and Cluster)

Partitioned tables with pruning of partitions in the optimiser

Shared-nothing clustering through MySQL Cluster

Hot backup (via mysqlhotcopy) under certain conditions[21]

The developers release monthly versions of the MySQL Server. The sources can be obtained from MySQL's web site or from MySQL's Bazaar repository, both under the GPL license.
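
Several of the features listed above, such as transactions and savepoints, are easiest to show with SQL itself. The snippet below uses Python's built-in sqlite3 module purely as a stand-in engine so it runs anywhere; the BEGIN, SAVEPOINT, ROLLBACK, and COMMIT statements are the same ones a MySQL client would issue against an InnoDB table.

```python
# Transactions and savepoints, demonstrated with sqlite3 as a stand-in
# for a transactional engine such as InnoDB.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO accounts VALUES ('alice', 100)")
cur.execute("SAVEPOINT before_bob")
cur.execute("INSERT INTO accounts VALUES ('bob', 50)")
cur.execute("ROLLBACK TO SAVEPOINT before_bob")   # undo bob's row only
cur.execute("COMMIT")

cur.execute("SELECT name FROM accounts")
print(cur.fetchall())                # only alice's row survived
```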

Distinguishing features

MySQL implements the following features, which some other RDBMS systems may not:


Multiple storage engines, allowing one to choose the one that is most effective for each table in the application (in MySQL 5.0, storage engines must be compiled in; in MySQL 5.1, storage engines can be dynamically loaded at run time):

o Native storage engines (MyISAM, Falcon, Merge, Memory (heap), Federated, Archive, CSV, Blackhole, Cluster, Berkeley DB, EXAMPLE, Maria, and InnoDB, which was made the default as of 5.5)

o Partner-developed storage engines (solidDB, NitroEDB, Infobright (formerly Brighthouse), Kickfire, XtraDB, IBM DB2 [22] ). InnoDB used to be a partner-developed storage engine, but with recent acquisitions, Oracle now owns both MySQL core and InnoDB.

o Community-developed storage engines (memcache engine, httpd, PBXT, Revision Engine)

o Custom storage engines

Commit grouping, gathering multiple transactions from multiple connections together to increase the number of commits per second.
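
Choosing a storage engine is done per table, with an ENGINE clause on CREATE TABLE. A minimal sketch that emits such statements follows; the table names and column definitions are made up for illustration.

```python
# Emit CREATE TABLE statements with a per-table ENGINE clause, in the
# form MySQL accepts. Table and column names are illustrative only.

def create_table_sql(name, columns, engine="InnoDB"):
    cols = ", ".join(f"{col} {ctype}" for col, ctype in columns)
    return f"CREATE TABLE {name} ({cols}) ENGINE={engine}"

# Transactional data on InnoDB, append-only history on Archive:
orders = create_table_sql("orders", [("id", "INT"), ("total", "DECIMAL(10,2)")])
audit = create_table_sql("audit_log", [("entry", "TEXT")], engine="ARCHIVE")

print(orders)
print(audit)
```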

Product history

Milestones in MySQL development include:

Original development of MySQL by Michael Widenius and David Axmark beginning in 1994[23]

First internal release on 23 May 1995

Windows version was released on 8 January 1998 for Windows 95 and NT

Version 3.23: beta from June 2000, production release January 2001

Version 4.0: beta from August 2002, production release March 2003 (unions)

Version 4.01: beta from August 2003, Jyoti adopts MySQL for database tracking

Version 4.1: beta from June 2004, production release October 2004 (R-trees and B-trees, subqueries, prepared statements)

Version 5.0: beta from March 2005, production release October 2005 (cursors, stored procedures, triggers, views, XA transactions)

The developer of the Federated Storage Engine states that "The Federated Storage Engine is a proof-of-concept storage engine",[24] but the main distributions of MySQL version 5.0 included it and turned it on by default. Documentation of some of the shortcomings appears in "MySQL Federated Tables: The Missing Manual".[25]

Sun Microsystems acquired MySQL AB on 26 February 2008.[4]

Version 5.1: production release 27 November 2008 (event scheduler, partitioning, plugin API, row-based replication, server log tables)


Version 5.1 contained 20 known crashing and wrong result bugs in addition to the 35 present in version 5.0 (almost all fixed as of release 5.1.51).[26]

MySQL 5.1 and 6.0 showed poor performance when used for data warehousing — partly due to its inability to utilize multiple CPU cores for processing a single query.[27]

Oracle acquired Sun Microsystems on 27 January 2010.[28]

MySQL Server 5.5 is currently generally available (as of December 2010). Enhancements and features include:

o The default storage engine is InnoDB, which supports transactions and referential integrity constraints.

o Improved InnoDB I/O subsystem [29]

o Improved SMP support [30]

o Semisynchronous replication.

o SIGNAL and RESIGNAL statement in compliance with the SQL standard.

o Support for supplementary Unicode character sets utf16, utf32, and utf8mb4.

o New options for user-defined partitioning.
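
The referential integrity constraints that InnoDB (the new default engine) enforces can be illustrated with a runnable stand-in: the snippet below uses Python's sqlite3 module, which checks the same FOREIGN KEY syntax once the pragma is enabled, whereas InnoDB checks it by default.

```python
# Foreign-key enforcement, demonstrated with sqlite3 as a stand-in for
# InnoDB's referential integrity checks.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # sqlite needs this; InnoDB does not
conn.execute("CREATE TABLE parents (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE children (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parents(id))""")

conn.execute("INSERT INTO parents VALUES (1)")
conn.execute("INSERT INTO children VALUES (10, 1)")       # OK: parent 1 exists

try:
    conn.execute("INSERT INTO children VALUES (11, 99)")  # no parent 99
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)   # True: the orphan row was rejected
```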

Future releases

MySQL Server 6.0.11-alpha was announced 22 May 2009 as the last release of the 6.0 line. Future MySQL Server development uses a New Release Model. Features developed for 6.0 are being incorporated into future releases.

MySQL 5.6, a development milestone release, was announced at the MySQL users conference 2011. New features include performance improvements to the query optimizer, higher transactional throughput in InnoDB, new NoSQL-style memcached APIs, improvements to partitioning for querying and managing very large tables, improvements to replication and better performance monitoring by expanding the data available through the PERFORMANCE_SCHEMA.[31]

Support and licensing

MySQL offers support via their MySQL Enterprise product, including a 24/7 service with 30-minute response time. The support team has direct access to the developers as necessary to handle problems. In addition, it hosts forums and mailing lists, and employees and other users are often available in several IRC channels to provide assistance.

Buyers of MySQL Enterprise have access to binaries and software certified for their particular operating system, and access to monthly binary updates with the latest bug-fixes. Several levels of Enterprise membership are available, with varying response times and features ranging from


how-to and emergency support to server performance tuning and system architecture advice. The MySQL Network Monitoring and Advisory Service monitoring tool for database servers is available only to MySQL Enterprise customers.

Potential users can install MySQL Server as free software under the GNU General Public License (GPL), and the MySQL Enterprise subscriptions include a GPL version of the server, with a traditional proprietary version available on request at no additional cost for cases where the intended use is incompatible with the GPL.[32]

Both the MySQL server software itself and the client libraries use dual-licensing distribution. Users may choose the GPL,[33] which MySQL has extended with a FLOSS License Exception that allows software licensed under other OSI-compliant open-source licenses that are not compatible with the GPL to link against the MySQL client libraries.[34]

Customers that do not wish to follow the terms of the GPL may purchase a proprietary license.[35]

Like many open-source programs, MySQL has trademarked its name, which others may use only with the trademark holder's permission.[36]

Corporate backing history

In October 2005, Oracle Corporation acquired Innobase OY, the Finnish company that developed the third-party InnoDB storage engine that allows MySQL to provide such functionality as transactions and foreign keys. After the acquisition, an Oracle press release mentioned that the contracts that make the company's software available to MySQL AB would be due for renewal (and presumably renegotiation) some time in 2006.[37] During the MySQL Users Conference in April 2006, MySQL issued a press release that confirmed that MySQL and Innobase OY agreed to a "multi-year" extension of their licensing agreement.[38]

In February 2006, Oracle Corporation acquired Sleepycat Software,[39] makers of the Berkeley DB, a database engine providing the basis for another MySQL storage engine. This had little effect, as Berkeley DB was not widely used, and was deprecated (due to lack of use) in MySQL 5.1.12, a pre-GA release of MySQL 5.1 released in October 2006.[40]

In January 2008, Sun Microsystems bought MySQL for US$1 billion.[41]

In April 2009, Oracle Corporation entered into an agreement to purchase Sun Microsystems,[42] then owners of the MySQL copyright and trademark. Sun's board of directors unanimously approved the deal; it was also approved by Sun's shareholders and by the U.S. government on August 20, 2009.[43] On December 14, 2009, Oracle pledged to continue to enhance MySQL[44] as it had done for the previous four years. A movement against Oracle's acquisition of MySQL, to "Save MySQL"[45] from Oracle, was started by one of the MySQL founders, Monty Widenius. The petition of 50,000+ developers and users called upon the European Commission to block approval of the acquisition. At the same time, several free-software opinion leaders (including Eben Moglen, Pamela Jones of Groklaw, Jan Wildeboer and Carlo Piana, who also acted as co-counsel in the merger regulation procedure) advocated for the unconditional approval of the


merger. As part of the negotiations with the European Commission, Oracle committed that the MySQL server would continue to use the dual-licensing strategy long used by MySQL AB, with commercial and GPL versions available, until at least 2015. The Oracle acquisition was eventually unconditionally approved by the European Commission on January 21, 2010.[46] Meanwhile, Monty Widenius has released a GPL-only fork, MariaDB. MariaDB is based on the same code base as MySQL server and strives to maintain compatibility with Oracle-provided versions.[47]

Forks

Drizzle – a fork targeted at the web-infrastructure and cloud computing markets. The developers of the product describe it as a "smaller, slimmer and (hopefully) faster version of MySQL". As such, it is planned to have many common MySQL features stripped out, including stored procedures, query cache, prepared statements, views, and triggers. It is a complete rewrite of the server that does not maintain compatibility with MySQL.

MariaDB – a community-developed branch of the MySQL database, the impetus being community maintenance of its free status under the GPL, as opposed to any uncertainty over MySQL's license status under its current ownership by Oracle. The intent is also to maintain high fidelity with MySQL, ensuring "drop-in" replacement capability with library binary equivalency and exact matching of MySQL APIs and commands. It includes the XtraDB storage engine as a replacement for InnoDB.

Percona Server – a fork that includes the XtraDB storage engine. It is an enhanced version of MySQL that is fully compatible, and deviates as little as possible from it, while still providing beneficial new features, better performance, and improved instrumentation for analysis of performance and usage.

OurDelta – best characterized as a source of binaries compiled with various patches, including patches from MariaDB, Percona, and Google.

MySQL versions