
Red Hat Enterprise Linux 7

7.1 Release Notes

Release Notes for Red Hat Enterprise Linux 7

Red Hat Customer Content Services

Legal Notice

Copyright © 2015 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
All other trademarks are the property of their respective owners.


1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701

Abstract

The Release Notes document the major features and enhancements implemented in Red Hat Enterprise Linux 7.1 and the known issues in this 7.1 release. For detailed information regarding the changes between Red Hat Enterprise Linux 6 and 7, consult the Migration Planning Guide.
Acknowledgements
Red Hat Global Support Services would like to recognize Sterling Alexander and Michael Everette for their outstanding contributions in testing Red Hat Enterprise Linux 7.
Preface
I. New Features
1. Architectures
2. Installation and Booting
3. Storage
4. File Systems
5. Kernel
6. Virtualization
7. Clustering
8. Compiler and Tools
9. Networking
10. Linux Containers with Docker Format
11. Authentication and Interoperability
12. Security
13. Desktop
14. Supportability and Maintenance
15. Red Hat Software Collections
II. Device Drivers
16. Storage Driver Updates
17. Network Driver Updates
18. Graphics Driver Updates
A. Revision History

Preface

Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 7.1 Release Notes document the major changes, features, and enhancements introduced in the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release. In addition, the Red Hat Enterprise Linux 7.1 Release Notes document the known issues in Red Hat Enterprise Linux 7.1.

Important

The online Red Hat Enterprise Linux 7.1 Release Notes, available on the Red Hat Customer Portal, are to be considered the definitive, up-to-date version. Customers with questions about the release are advised to consult the online Release Notes for their version of Red Hat Enterprise Linux.

Known Issues

For Known Issue descriptions, refer to the English version of the Red Hat Enterprise Linux 7.1 Release Notes.
Should you require information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/.

Part I. New Features

Chapter 1. Architectures

Red Hat Enterprise Linux 7.1 is available as a single kit on the following architectures: [1]
  • 64-bit AMD
  • 64-bit Intel
  • IBM POWER7 and POWER8 (big endian)
  • IBM POWER8 (little endian) [2]
  • IBM System z [3]
In this release, Red Hat brings together improvements for servers and systems, as well as for the overall Red Hat open source experience.

1.1. Red Hat Enterprise Linux for POWER, Little Endian

Red Hat Enterprise Linux 7.1 introduces little endian support on IBM Power Systems servers using IBM POWER8 processors. Previously in Red Hat Enterprise Linux 7, only a big endian variant was offered for IBM Power Systems. Support for little endian on POWER8-based servers aims to improve portability of applications between 64-bit Intel compatible systems (x86_64) and IBM Power Systems.
  • Separate installation media are offered for installing Red Hat Enterprise Linux on IBM Power Systems servers in little endian mode. These media are available from the Download section of the Red Hat Customer Portal.
  • Only IBM POWER8 processor-based servers are supported with Red Hat Enterprise Linux for POWER, little endian.
  • Currently, Red Hat Enterprise Linux for POWER, little endian is only supported as a KVM guest under Red Hat Enterprise Virtualization for Power. Installation on bare metal hardware is currently not supported.
  • The GRUB2 boot loader is used on the installation media and for network boot. The Installation Guide has been updated with instructions for setting up a network boot server for IBM Power Systems clients using GRUB2.
  • All software packages for IBM Power Systems are available for both the little endian and the big endian variant of Red Hat Enterprise Linux for POWER.
  • Packages built for Red Hat Enterprise Linux for POWER, little endian use the ppc64le architecture code - for example, gcc-4.8.3-9.ael7b.ppc64le.rpm.


[1] Note that the Red Hat Enterprise Linux 7.1 installation is only supported on 64-bit hardware. Red Hat Enterprise Linux 7.1 is able to run 32-bit operating systems, including previous versions of Red Hat Enterprise Linux, as virtual machines.
[2] Red Hat Enterprise Linux 7.1 (little endian) is currently only supported as a KVM guest under Red Hat Enterprise Virtualization for Power and PowerVM hypervisors.
[3] Note that Red Hat Enterprise Linux 7.1 supports IBM zEnterprise 196 hardware or later; IBM System z10 mainframe systems are no longer supported and will not boot Red Hat Enterprise Linux 7.1.

Chapter 2. Installation and Booting

2.1. Installer

The Red Hat Enterprise Linux installer, Anaconda, has been enhanced in order to improve the installation process for Red Hat Enterprise Linux 7.1.

Interface

  • The graphical installer interface now contains one additional screen which enables configuring the Kdump kernel crash dumping mechanism during the installation. Previously, this was configured after the installation using the firstboot utility, which was not accessible without a graphical interface. Now, you can configure Kdump as part of the installation process on systems without a graphical environment. The new screen is accessible from the main installer menu (Installation Summary).

    Figure 2.1. The new Kdump screen


  • The manual partitioning screen has been redesigned to improve user experience. Some of the controls have been moved to different locations on the screen.

    Figure 2.2. The redesigned Manual Partitioning screen


  • You can now configure a network bridge in the Network & Hostname screen of the installer. To do so, click the + button at the bottom of the interface list, select Bridge from the menu, and configure the bridge in the Editing bridge connection dialog which appears afterwards. This dialog is provided by NetworkManager and is fully documented in the Red Hat Enterprise Linux 7.1 Networking Guide.
    Several new Kickstart options have also been added for bridge configuration. See below for details.
  • The installer no longer uses multiple consoles to display logs. Instead, all logs are in tmux panes in virtual console 1 (tty1). To access logs during the installation, press Ctrl+Alt+F1 to switch to tmux, and then use Ctrl+b X to switch between different windows (replace X with the number of a particular window as displayed at the bottom of the screen).
    To switch back to the graphical interface, press Ctrl+Alt+F6.
  • The command-line interface for Anaconda now includes full help. To view it, use the anaconda -h command on a system with the anaconda package installed. The command-line interface allows you to run the installer on an installed system, which is useful for disk image installations.

Kickstart Commands and Options

  • The logvol command has a new option: --profile=. Use this option to specify the configuration profile name to use with thin logical volumes. If used, the name will also be included in the metadata for the logical volume.
    By default, the available profiles are default and thin-performance and are defined in the /etc/lvm/profile directory. See the lvm(8) man page for additional information.
  • The --autoscreenshot option of the autostep Kickstart command has been fixed, and now correctly saves a screenshot of each screen into the /tmp/anaconda-screenshots directory upon exiting said screen. After the installation completes, these screenshots are moved into /root/anaconda-screenshots.
  • The liveimg command now supports installation from tar files as well as disk images. The tar archive must contain the installation media root file system, and the file name must end with .tar, .tbz, .tgz, .txz, .tar.bz2, .tar.gz, or .tar.xz.
  • Several new options have been added to the network command for configuring network bridges. These options are:
    • --bridgeslaves=: When this option is used, the network bridge with device name specified using the --device= option will be created and devices defined in the --bridgeslaves= option will be added to the bridge. For example:
      network --device=bridge0 --bridgeslaves=em1
    • --bridgeopts=: An optional comma-separated list of parameters for the bridged interface. Available values are stp, priority, forward-delay, hello-time, max-age, and ageing-time. For information about these parameters, see the nm-settings(5) man page.
  • The autopart command has a new option, --fstype. This option allows you to change the default file system type (xfs) when using automatic partitioning in a Kickstart file.
  • Several new features were added to Kickstart for better Docker support. These features include:
    • repo --install: This new option saves the provided repository configuration on the installed system in the /etc/yum.repos.d/ directory. Without using this option, a repository configured in a Kickstart file will only be available during the installation process, not on the installed system.
    • bootloader --disabled: This option will prevent the boot loader from being installed.
    • %packages --nocore: A new option for the %packages section of a Kickstart file which prevents the system from installing the @core package group. This enables installing extremely minimal systems for use with containers.
    Please note that the described options are only useful when combined with Docker containers, and using the options in a general-purpose installation could result in an unusable system.
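Several of the options described above can be combined in a single Kickstart file. The following fragment is an illustrative sketch only; the device names, volume group names, and sizes shown are assumptions, not defaults:

```
# Illustrative Kickstart fragment (device, volume, and size values are assumptions)
network --device=bridge0 --bridgeslaves=em1 --bridgeopts="stp=yes,forward-delay=15"
part pv.01 --size=20480
volgroup vg_system pv.01
logvol none --vgname=vg_system --name=pool --thinpool --profile=thin-performance --size=16384
logvol / --vgname=vg_system --name=root --thin --poolname=pool --fstype=xfs --size=8192
```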
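For the container-oriented options, a minimal Kickstart file might look like the following sketch. The repository URL and package selection are assumptions chosen for illustration:

```
# Illustrative Kickstart fragment for a minimal container image
url --url=http://example.com/rhel7/os/
repo --name=extras --baseurl=http://example.com/rhel7/extras/ --install
bootloader --disabled
%packages --nocore
bash
yum
%end
```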

Anaconda Entropy

  • In Red Hat Enterprise Linux 7.1, Anaconda gathers entropy when a disk is to be encrypted, in order to prevent possible security issues that could be caused by creating an encrypted format for data with a low degree of entropy. Therefore, Anaconda waits until enough entropy has been gathered before creating an encrypted format, and suggests to the user how to reduce the waiting time.

Built-in Help in the Graphical Installer

Each screen in the installer's graphical interface and in the Initial Setup utility now has a Help button in the top right corner. Clicking this button opens the section of the Installation Guide relevant to the current screen using the Yelp help browser.

2.2. Boot Loader

Installation media for IBM Power Systems now use the GRUB2 boot loader instead of the previously offered yaboot. For the big endian variant of Red Hat Enterprise Linux for POWER, GRUB2 is preferred but yaboot can also be used. The newly introduced little endian variant requires GRUB2 to boot.
The Installation Guide has been updated with instructions for setting up a network boot server for IBM Power Systems using GRUB2.

Chapter 3. Storage

LVM Cache

As of Red Hat Enterprise Linux 7.1, LVM cache is fully supported. This feature allows users to create logical volumes with a small fast device performing as a cache to larger slower devices. Please refer to the lvm(8) manual page for information on creating cache logical volumes.
Note the following restrictions on the use of cache logical volumes (LVs):
  • The cache LV must be a top-level device. It cannot be used as a thin-pool LV, an image of a RAID LV, or any other sub-LV type.
  • The properties of the cache LV cannot be changed after creation. To change cache properties, remove the cache and recreate it with the desired properties.
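As a sketch of the workflow described in the lvm(8) manual page, a cache LV might be assembled as follows; the volume group, logical volume, and device names below are assumptions:

```
# Create the cache data and metadata LVs on the fast device (names are assumptions)
lvcreate --size 1G  --name lv_cache      vg00 /dev/fast_ssd
lvcreate --size 32M --name lv_cache_meta vg00 /dev/fast_ssd
# Combine them into a cache pool
lvconvert --type cache-pool --poolmetadata vg00/lv_cache_meta vg00/lv_cache
# Attach the cache pool to an existing LV residing on the slower device
lvconvert --type cache --cachepool vg00/lv_cache vg00/lv_data
```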

Storage Array Management with libStorageMgmt API

With Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt, a storage array independent API, is fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. Please note that the Targetd plug-in is not fully supported and remains a Technology Preview. The following arrays and providers are supported:
  • NetApp Filer (ontap 7-Mode)
  • Nexenta (nstor 3.1.x only)
  • SMI-S, for the following vendors:
    • HP 3PAR
      • OS release 3.2.1 or later
    • EMC VMAX and VNX
      • Solutions Enabler V7.6.2.48 or later
      • SMI-S Provider V4.6.2.18 hotfix kit or later
    • HDS VSP Array non-embedded provider
      • Hitachi Command Suite v8.0 or later
For more information on libStorageMgmt, refer to the relevant chapter in the Storage Administration Guide.

Support for LSI Syncro

Red Hat Enterprise Linux 7.1 includes code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter will be provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.1 are encouraged to provide feedback to Red Hat and LSI. For more information on LSI Syncro CS solutions, please visit http://www.lsi.com/products/shared-das/pages/default.aspx.

LVM Application Programming Interface

Red Hat Enterprise Linux 7.1 features the new LVM application programming interface (API) as a Technology Preview. This API is used to query and control certain aspects of LVM.
Refer to the lvm2app.h header file for more information.

DIF/DIX Support

DIF/DIX is a new addition to the SCSI Standard and a Technology Preview in Red Hat Enterprise Linux 7.1. DIF/DIX increases the size of the commonly used 512-byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receive, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA.
For more information, refer to the section Block Devices with DIF/DIX Enabled in the Storage Administration Guide.

Enhanced device-mapper-multipath Syntax Error Checking and Output

The device-mapper-multipath tool has been enhanced to verify the multipath.conf file more reliably. As a result, if multipath.conf contains any lines that cannot be parsed, device-mapper-multipath reports an error and ignores these lines to avoid incorrect parsing.
In addition, the following wildcard expressions have been added for the multipathd show paths format command:
  • %N and %n for the host and target Fibre Channel World Wide Node Names, respectively.
  • %R and %r for the host and target Fibre Channel World Wide Port Names, respectively.
Now, it is easier to associate multipaths with specific Fibre Channel hosts, targets, and their ports, which allows users to manage their storage configuration more effectively.
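For example, the new wildcards can be combined in a format string such as the following; the exact column selection is an illustrative assumption:

```
multipathd show paths format "%d %N %n %R %r"
```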

Chapter 4. File Systems

Support of Btrfs File System

The Btrfs (B-Tree) file system is supported as a Technology Preview in Red Hat Enterprise Linux 7.1. This file system offers advanced management, reliability, and scalability features. It enables users to create snapshots, and it supports compression and integrated device management.

Support of Parallel NFS

Parallel NFS (pNFS) is a part of the NFS v4.1 standard that allows clients to access storage devices directly and in parallel. The pNFS architecture can improve the scalability and performance of NFS servers for several common workloads.
pNFS defines three different storage protocols or layouts: files, objects, and blocks. The Red Hat Enterprise Linux 7.1 client fully supports the files layout, and the objects and blocks layouts are supported as a Technology Preview.
Red Hat continues to work with partners and open source projects to qualify new pNFS layout types and to provide full support for more layout types in the future.
For more information on pNFS, refer to http://www.pnfs.com/.

Chapter 5. Kernel

Support for Ceph Block Devices

The libceph.ko and rbd.ko modules have been added to the Red Hat Enterprise Linux 7.1 kernel. These RBD kernel modules allow a Linux host to see a Ceph block device as a regular disk device entry which can be mounted to a directory and formatted with a standard file system, such as XFS or ext4.
Note that the CephFS module, ceph.ko, is currently not supported in Red Hat Enterprise Linux 7.1.
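Assuming a reachable Ceph cluster and an existing RBD image, using the block device might look like the following sketch; the pool, image, user, and mount point names are assumptions:

```
# Map the RBD image to a local block device (loads the rbd.ko module if needed)
rbd map mypool/myimage --id admin
# Format and mount it like a regular disk
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/ceph-disk
```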

Concurrent Flash MCL Updates

Microcode level upgrades (MCL) are enabled in Red Hat Enterprise Linux 7.1 on the IBM System z architecture. These upgrades can be applied without impacting I/O operations to the flash storage media and notify users of the changed flash hardware service level.

Dynamic Kernel Patching

Red Hat Enterprise Linux 7.1 introduces kpatch, a dynamic kernel patching utility, as a Technology Preview. The kpatch utility allows users to manage a collection of binary kernel patches which can be used to dynamically patch the kernel without rebooting. Note that kpatch is supported to run on AMD64 and Intel 64 architectures only.
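A hot fix module built for kpatch could be applied and inspected with commands along the following lines; the module path shown is an assumption:

```
kpatch load /usr/lib/kpatch/patches/kpatch-example.ko   # apply the patch without rebooting
kpatch list                                             # show currently loaded patch modules
```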

Crashkernel with More than 1 CPU

Red Hat Enterprise Linux 7.1 enables booting crashkernel with more than one CPU. This function is supported as a Technology Preview.

dm-era Target

Red Hat Enterprise Linux 7.1 introduces the dm-era device-mapper target as a Technology Preview. dm-era keeps track of which blocks were written within a user-defined period of time called an "era". Each era target instance maintains the current era as a monotonically increasing 32-bit counter. This target enables backup software to track which blocks have changed since the last backup. It also enables partial invalidation of the contents of a cache to restore cache coherency after rolling back to a vendor snapshot. The dm-era target is primarily expected to be paired with the dm-cache target.

Cisco VIC kernel Driver

The Cisco VIC Infiniband kernel driver has been added to Red Hat Enterprise Linux 7.1 as a technology preview. This driver allows the use of Remote Directory Memory Access (RDMA)-like semantics on proprietary Cisco architectures.

Enhanced Entropy Management in hwrng

The paravirtualized hardware RNG (hwrng) support for Linux guests via virtio-rng has been enhanced in Red Hat Enterprise Linux 7.1. Previously, the rngd daemon needed to be started inside the guest and directed to the guest kernel's entropy pool. Starting with Red Hat Enterprise Linux 7.1, this manual step is no longer necessary: a new khwrngd kernel thread fetches entropy from the virtio-rng device whenever the guest entropy falls below a specific level. Making this process transparent helps all Red Hat Enterprise Linux guests utilize the improved security benefits of the paravirtualized hardware RNG provided by KVM hosts.

Scheduler Load-Balancing Performance Improvement

Previously, the scheduler load-balancing code balanced for all idle CPUs. In Red Hat Enterprise Linux 7.1, idle load balancing on behalf of an idle CPU is done only when the CPU is due for load balancing. This new behavior reduces the load-balancing rate on non-idle CPUs and therefore the amount of unnecessary work done by the scheduler, which improves its performance.

Improved newidle Balance in Scheduler

The behavior of the scheduler has been modified to stop searching for tasks in the newidle balance code if there are runnable tasks, which leads to better performance.

HugeTLB Supports Per-Node 1GB Huge Page Allocation

Red Hat Enterprise Linux 7.1 has added support for gigantic page allocation at runtime, which allows users of 1 GB hugetlbfs to specify the Non-Uniform Memory Access (NUMA) node on which 1 GB huge pages are allocated.
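For example, 1 GB pages can be reserved on a specific node through sysfs; the node number and page count below are illustrative, and the command requires root privileges:

```
# Allocate two 1 GB huge pages on NUMA node 1 at runtime
echo 2 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
```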

New MCS-based Locking Mechanism

Red Hat Enterprise Linux 7.1 introduces a new locking mechanism, MCS locks. This new locking mechanism significantly reduces spinlock overhead in large systems, which makes spinlocks generally more efficient in Red Hat Enterprise Linux 7.1.

Process Stack Size Increased from 8KB to 16KB

Starting with Red Hat Enterprise Linux 7.1, the kernel process stack size has been increased from 8KB to 16KB to help large processes that use stack space.

uprobe and uretprobe Features Enabled in perf and systemtap

With Red Hat Enterprise Linux 7.1, the uprobe and uretprobe features work correctly with the perf command and with SystemTap scripts.

End-To-End Data Consistency Checking

End-To-End data consistency checking on IBM System z is fully supported in Red Hat Enterprise Linux 7.1. This enhances data integrity and prevents data corruption as well as data loss more effectively.

DRBG on 32-Bit Systems

With Red Hat Enterprise Linux 7.1, the deterministic random bit generator (DRBG) has been updated to work on 32-bit systems.

Support for Large Crashkernel Sizes

The Kdump kernel crash dumping mechanism on systems with large (more than 4TB) memory has become fully supported in Red Hat Enterprise Linux 7.1.

Chapter 6. Virtualization

Increased Maximum Number of vCPUs in KVM

The maximum number of supported virtual CPUs (vCPUs) in a KVM guest has been increased to 240. This increases the number of virtual processing units that a user can assign to the guest, and therefore improves its performance potential.

5th Generation Intel Core New Instructions Support in QEMU, KVM, and libvirt API

With Red Hat Enterprise Linux 7.1, support for 5th Generation Intel Core processors has been added to the QEMU hypervisor, the KVM kernel code, and the libvirt API. This allows KVM guests to use the following instructions and features: ADCX, ADOX, RDSEED, PREFETCHW, and supervisor mode access prevention (SMAP).

USB 3.0 Support for KVM Guests

Red Hat Enterprise Linux 7.1 features improved USB support by adding USB 3.0 host adapter (xHCI) emulation as a Technology Preview.

Compression for the dump-guest-memory Command

With Red Hat Enterprise Linux 7.1, the dump-guest-memory command supports crash dump compression. This makes it possible for users who cannot use the virsh dump command to require less hard drive space for guest crash dumps. In addition, saving a compressed guest crash dump frequently takes less time than saving a non-compressed one.

Open Virtual Machine Firmware

The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7.1. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests.

Improve Network Performance on Hyper-V

Several new features of the Hyper-V network driver are supported to improve network performance. For example, Receive-Side Scaling, Large Send Offload, and Scatter/Gather I/O are now supported, and network throughput is increased.

hypervfcopyd in hyperv-daemons

The hypervfcopyd daemon has been added to the hyperv-daemons packages. hypervfcopyd is an implementation of the file copy service functionality for Linux guests running on a Hyper-V 2012 R2 host. It enables the host to copy a file (over VMBus) into the Linux guest.

New Features in libguestfs

Red Hat Enterprise Linux 7.1 introduces a number of new features in libguestfs, a set of tools for accessing and modifying virtual machine disk images.
New Tools
  • virt-builder — a new tool for building virtual machine images. Use virt-builder to rapidly and securely create guests and customize them.
  • virt-customize — a new tool for customizing virtual machine disk images. Use virt-customize to install packages, edit configuration files, run scripts, and set passwords.
  • virt-diff — a new tool for showing differences between the file systems of two virtual machines. Use virt-diff to easily discover what files have been changed between snapshots.
  • virt-log — a new tool for listing log files from guests. The virt-log tool supports a variety of guests including Linux traditional, Linux using journal, and Windows event log.
  • virt-v2v — a new tool for converting guests from a foreign hypervisor to run on KVM, managed by libvirt, OpenStack, oVirt, Red Hat Enterprise Virtualization (RHEV), and several other targets. Currently, virt-v2v can convert Red Hat Enterprise Linux and Windows guests running on Xen and VMware ESX.
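As an illustration of the first two tools, a guest image could be built and customized in a few commands. The template name, output file, password, and package below are assumptions; available templates depend on your installation and subscriptions:

```
# Build a Red Hat Enterprise Linux 7.1 image and set the root password
virt-builder rhel-7.1 --output rhel71.img --root-password password:MyPassw0rd
# Install a package into the resulting disk image
virt-customize -a rhel71.img --install httpd
```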

Improved Block I/O Performance Using virtio-blk-data-plane

The virtio-blk-data-plane I/O virtualization functionality has become fully supported in Red Hat Enterprise Linux 7.1. This functionality extends QEMU to perform disk I/O in a dedicated thread that is optimized for I/O performance.

Flight Recorder Tracing

SystemTap-based tracing has been introduced in Red Hat Enterprise Linux 7.1. SystemTap-based tracing allows users to automatically capture qemu-kvm data as long as the guest machine is running. This provides an additional avenue for investigating qemu-kvm problems, more flexible than qemu-kvm core dumps.
For detailed instructions on how to configure and use flight recorder tracing, see the Virtualization Deployment and Administration Guide.

NUMA Node Memory Allocation Control

The <memnode> element has been added to the <numatune> setting in the domain XML configuration of libvirt. This enables users to control memory restrictions for each Non-Uniform Memory Access (NUMA) node of the guest operating system, which allows performance optimization for qemu-kvm.
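In the domain XML, the new element might be used as in the following sketch; the cell IDs, modes, and nodeset values are illustrative assumptions:

```xml
<numatune>
  <memory mode="strict" nodeset="0-1"/>
  <!-- Pin guest NUMA cell 0 memory to host node 0, and cell 1 to host node 1 -->
  <memnode cellid="0" mode="strict" nodeset="0"/>
  <memnode cellid="1" mode="preferred" nodeset="1"/>
</numatune>
```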

Chapter 7. Clustering

Dynamic Token Timeout for Corosync

The token_coefficient option has been added to the Corosync Cluster Engine. The value of token_coefficient is used only when the nodelist section is specified and contains at least three nodes. In such a situation, the token timeout is computed as follows:
token + (number of nodes - 2) * token_coefficient
This allows the cluster to scale without manually changing the token timeout every time a new node is added. The default value of token_coefficient is 650 milliseconds, but it can be set to 0, which effectively disables this feature.
This feature allows Corosync to handle dynamic addition and removal of nodes.
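As a worked example of the computation described in the corosync.conf(5) manual page, where the effective timeout is token + (number of nodes - 2) * token_coefficient, the values below are illustrative assumptions:

```shell
# token timeout = token + (number of nodes - 2) * token_coefficient
token=1000         # configured base token timeout, in ms (assumption)
nodes=5            # nodes listed in the nodelist section (assumption)
coefficient=650    # default token_coefficient, in ms
echo $(( token + (nodes - 2) * coefficient ))   # prints 2950
```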

Corosync Tie Breaker Enhancement

The auto_tie_breaker quorum feature of Corosync has been enhanced to provide options for more flexible configuration and modification of tie breaker nodes. Users can now select a list of nodes that will retain a quorum in case of an even cluster split, or choose that a quorum will be retained by the node with the lowest node ID or the highest node ID.

Enhancements for Red Hat High Availability

For the Red Hat Enterprise Linux 7.1 release, the Red Hat High Availability Add-On supports the following features. For information on these features, see the High Availability Add-On Reference manual.
  • The pcs resource cleanup command can now reset the resource status and failcount for all resources.
  • You can specify a lifetime parameter for the pcs resource move command to indicate a period of time that the resource constraint this command creates will remain in effect.
  • You can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs).
  • The pcs constraint command now supports the configuration of specific constraint options in addition to general resource options.
  • The pcs resource create command supports the disabled parameter to indicate that the resource being created is not started automatically.
  • The pcs cluster quorum unblock command prevents the cluster from waiting for all nodes when establishing a quorum.
  • You can configure resource group order with the before and after parameters of the pcs resource create command.
  • You can back up the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command.
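A few of these commands in use, as a sketch; the resource, node, and file names below are assumptions:

```
pcs resource cleanup                                    # reset status and failcount for all resources
pcs resource move webserver node2.example.com lifetime=PT1H
pcs resource create dummy ocf:heartbeat:Dummy --disabled
pcs config backup cluster-backup                        # save the configuration as a tarball
pcs config restore cluster-backup.tar.bz2
```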

Chapter 8. Compiler and Tools

Hot-patching Support for Linux on System z Binaries

GNU Compiler Collection (GCC) implements support for on-line patching of multi-threaded code for Linux on System z binaries. Selecting specific functions for hot-patching is enabled by using a "function attribute" and hot-patching for all functions can be enabled using the -mhotpatch command-line option.
Enabling hot-patching has a negative impact on software size and performance. It is therefore recommended to use hot-patching for specific functions instead of enabling hot patch support for all functions.
Hot-patching support for Linux on System z binaries was a Technology Preview for Red Hat Enterprise Linux 7.0. With the release of Red Hat Enterprise Linux 7.1, it is now fully supported.

Performance Application Programming Interface Enhancement

Red Hat Enterprise Linux 7 includes the Performance Application Programming Interface (PAPI). PAPI is a specification for cross-platform interfaces to hardware performance counters on modern microprocessors. These counters exist as a small set of registers that count events, which are occurrences of specific signals related to a processor's function. Monitoring these events has a variety of uses in application performance analysis and tuning.
In Red Hat Enterprise Linux 7.1, PAPI and the related libpfm libraries have been enhanced to provide support for IBM POWER8, AppliedMicro X-Gene, ARM Cortex-A57, and ARM Cortex-A53 processors. In addition, the event sets have been updated for Intel Haswell, Ivy Bridge, and Sandy Bridge processors.

OProfile

OProfile is a system-wide profiler for Linux systems. The profiling runs transparently in the background, and profile data can be collected at any time. In Red Hat Enterprise Linux 7.1, OProfile has been enhanced to provide support for the following processor families: Intel Atom Processor C2XXX, 5th Generation Intel Core processors, IBM POWER8, AppliedMicro X-Gene, and ARM Cortex-A57.

OpenJDK8

As a Technology Preview, Red Hat Enterprise Linux 7.1 features the java-1.8.0-openjdk packages, which contain the latest version of the Open Java Development Kit (OpenJDK), OpenJDK8. These packages provide a fully compliant implementation of Java SE 8 and may be used in parallel with the existing java-1.7.0-openjdk packages, which remain available in Red Hat Enterprise Linux 7.1.
Java 8 brings numerous new improvements, such as Lambda expressions, default methods, a new Stream API for collections, JDBC 4.2, hardware AES support, and much more. In addition to these, OpenJDK8 contains numerous other performance updates and bug fixes.

sosreport Replaces snap

The deprecated snap tool has been removed from the powerpc-utils package. Its functionality has been integrated into the sosreport tool.

GDB Support for Little-Endian 64-bit PowerPC

Red Hat Enterprise Linux 7.1 implements support for the 64-bit PowerPC little-endian architecture in the GNU Debugger (GDB).

Tuna Enhancement

Tuna is a tool that can be used to adjust scheduler tunables, such as scheduler policy, RT priority, and CPU affinity. With Red Hat Enterprise Linux 7.1, the Tuna GUI has been enhanced to request root authorization when launched, so that the user does not have to run the desktop as root to invoke the Tuna GUI. For further information on Tuna, see the Tuna User Guide.

Chapter 9. Networking

Trusted Network Connect

Red Hat Enterprise Linux 7.1 introduces the Trusted Network Connect functionality as a Technology Preview. Trusted Network Connect works with existing network access control (NAC) solutions, such as TLS, 802.1X, or IPsec, to integrate endpoint posture assessment; that is, it collects an endpoint's system information (such as operating system configuration settings, installed packages, and other data, termed integrity measurements) and verifies these measurements against network access policies before allowing the endpoint to access the network.

SR-IOV Functionality in the qlcnic Driver

Support for Single Root I/O Virtualization (SR-IOV) has been added to the qlcnic driver as a Technology Preview. Support for this functionality will be provided directly by QLogic, and customers are encouraged to provide feedback to QLogic and Red Hat. Other functionality in the qlcnic driver remains fully supported.

Berkeley Packet Filter

Support for a Berkeley Packet Filter (BPF) based traffic classifier has been added to Red Hat Enterprise Linux 7.1. BPF is used in packet filtering for packet sockets, for sand-boxing in secure computing mode (seccomp), and in Netfilter. BPF has a just-in-time implementation for the most important architectures and has a rich syntax for building filters.

Improved Clock Stability

Previously, test results indicated that disabling the tickless kernel capability could significantly improve the stability of the system clock. Kernel tickless mode can be disabled by adding nohz=off to the kernel boot parameters. However, recent improvements to the kernel in Red Hat Enterprise Linux 7.1 have greatly improved the stability of the system clock, and the difference in stability with and without nohz=off should now be much smaller for most users. This is useful for time synchronization applications using PTP and NTP.
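As a sketch of how the parameter can be applied (requires root; grubby is the standard RHEL 7 tool for editing boot entries):

```shell
# Append nohz=off to the boot command line of every installed kernel:
grubby --update-kernel=ALL --args="nohz=off"

# Confirm the argument is present on the default boot entry:
grubby --info=DEFAULT
```

The change takes effect on the next reboot.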

libnetfilter_queue Packages

The libnetfilter_queue package has been added to Red Hat Enterprise Linux 7.1. libnetfilter_queue is a user space library providing an API to packets that have been queued by the kernel packet filter. It enables receiving queued packets from the kernel nfnetlink_queue subsystem, parsing of the packets, rewriting packet headers, and re-injecting altered packets.

Teaming Enhancements

The libteam package has been updated to version 1.14-1 in Red Hat Enterprise Linux 7.1. It provides a number of bug fixes and enhancements; in particular, teamd can now be automatically re-spawned by systemd, which increases overall reliability.

Intel QuickAssist Technology Driver

The Intel QuickAssist Technology (QAT) driver has been added to Red Hat Enterprise Linux 7.1. The QAT driver enables QuickAssist hardware, which adds hardware-offloaded cryptographic capabilities to a system.

LinuxPTP timemaster Support for Failover between PTP and NTP

The linuxptp package has been updated to version 1.4 in Red Hat Enterprise Linux 7.1. It provides a number of bug fixes and enhancements; in particular, it adds support for failover between PTP domains and NTP sources using the timemaster application. When there are multiple PTP domains available on the network, or fallback to NTP is needed, the timemaster program can be used to synchronize the system clock to all available time sources.

Network initscripts

Support for custom VLAN names has been added in Red Hat Enterprise Linux 7.1. Improved support for IPv6 in GRE tunnels has been added; the inner address now persists across reboots.
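With the custom VLAN name support, an ifcfg file can give a VLAN device an arbitrary name as long as the parent device and VLAN ID are stated explicitly. A minimal sketch, in which the device name, interface, and ID are all illustrative:

```
# /etc/sysconfig/network-scripts/ifcfg-myvlan (illustrative values)
DEVICE=myvlan
PHYSDEV=eth0
VLAN_ID=42
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
```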

TCP Delayed ACK

Support for a configurable TCP delayed ACK has been added to the iproute package in Red Hat Enterprise Linux 7.1. It can be enabled with the quickack option of the ip route command.
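As a command sketch (requires root; the network prefix and device name are illustrative):

```shell
# Enable quick ACKs (disable delayed ACKs) on a specific route:
ip route change 192.0.2.0/24 dev eth0 quickack 1

# Verify that the route now carries the quickack attribute:
ip route show 192.0.2.0/24
```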

NetworkManager

The bonding option lacp_rate is now supported in Red Hat Enterprise Linux 7.1. NetworkManager has also been enhanced to make it easy to rename master interfaces that have slave interfaces.
In addition, a priority setting has been added to the auto-connect function of NetworkManager. If more than one eligible candidate is available for auto-connect, NetworkManager selects the connection with the highest priority. If all available connections have equal priority values, NetworkManager uses the default behavior and selects the last active connection.

Network Namespaces and VTI

Support for virtual tunnel interfaces (VTI) with network namespaces has been added in Red Hat Enterprise Linux 7.1. This enables traffic from a VTI to be passed between different namespaces when packets are encapsulated or de-encapsulated.

Alternative Configuration Storage for the MemberOf Plug-In

The configuration of the MemberOf plug-in for the 389 Directory Server can now be stored in a suffix mapped to a back-end database. This allows the MemberOf plug-in configuration to be replicated, which makes it easier for the user to maintain a consistent MemberOf plug-in configuration in a replicated environment.

Chapter 10. Linux Containers with Docker Format

Docker is an open source project that automates the deployment of applications inside Linux Containers, and provides the capability to package an application with its runtime dependencies into a container. It provides a Docker CLI command line tool for the lifecycle management of image-based containers. Linux containers enable rapid application deployment, simpler testing, maintenance, and troubleshooting while improving security. Using Red Hat Enterprise Linux 7 with Docker allows customers to increase staff efficiency, deploy third-party applications faster, enable a more agile development environment, and manage resources more tightly.
To quickly get up-and-running with Docker Containers, refer to Get Started with Docker Containers.
Red Hat Enterprise Linux 7.1 ships with Docker 1.3.2, which includes a number of new features.
  • Digital Signature Verification has been implemented in Docker as a Tech Preview feature. The Docker Engine will now automatically verify the provenance and integrity of all Official Repos using digital signatures.
  • The docker exec command enables processes to be spawned inside a Docker container using the Docker API.
  • The docker create command creates a container but does not spawn a process in it. This improves the management of the containers' life cycles.
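The create/start/exec workflow described above can be sketched as follows; the container name, image reference, and commands are illustrative, and a running Docker daemon is required:

```shell
# Create a container without spawning a process in it:
docker create --name web registry.access.redhat.com/rhel7 sleep infinity

# Start it later, then spawn an additional process inside it:
docker start web
docker exec web ps -ef

# Clean up:
docker stop web
docker rm web
```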
Red Hat provides Docker base images for building applications on both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7.
Red Hat is also providing Kubernetes for use in orchestrating containers. For more information about Kubernetes see Get Started Orchestrating Docker Containers with Kubernetes.
Linux containers with Docker format are supported running on hosts with SELinux enabled. SELinux is not supported when the /var/lib/docker/ directory is located on a volume using the B-tree file system (Btrfs).

10.1. Components of Docker Containers

Docker works with the following fundamental components:
  • Container – an application sandbox. Each container is based on an image that holds the necessary configuration data. When you launch a container from an image, a writable layer is added on top of this image. Every time you commit a container (using the docker commit command), a new image layer is added to store your changes.
  • Image – a static snapshot of a container's configuration. An image is a read-only layer that is never modified; all changes are made in the top-most writable layer and can be saved only by creating a new image. Each image depends on one or more parent images.
  • Platform Image – an image that has no parent. Platform images define the runtime environment, packages, and utilities necessary for a containerized application to run. The platform image is read-only, so any changes are reflected in the copied images stacked on top of it. See an example of such stacking in Figure 10.1, “Image Layering Using Docker Format”.
  • Registry – a repository of images. Registries are public or private repositories that contain images available for download. Some registries allow users to upload images to make them available to others.
  • Dockerfile – a configuration file with build instructions for Docker images. Dockerfiles provide a way to automate, reuse, and share build procedures.
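As an illustration of the last component, a minimal Dockerfile might look as follows; the base-image reference, package, and command are illustrative:

```dockerfile
# Build on a Red Hat Enterprise Linux 7 base image (illustrative reference):
FROM registry.access.redhat.com/rhel7
# Install the application's runtime dependencies:
RUN yum install -y httpd && yum clean all
# Document the listening port and define the container's entry point:
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```

The image would then be built with docker build -t myapp . in the directory containing the Dockerfile.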
Image Layering Using Docker Format
A scheme depicting image layers used in Docker.

Figure 10.1. Image Layering Using Docker Format


10.2. Advantages of Using Docker

Docker brings in an API for container management, an image format, and the possibility to use a remote registry for sharing containers. This scheme benefits both developers and system administrators with advantages such as:
  • Rapid application deployment – containers include the minimal runtime requirements of the application, reducing their size and allowing them to be deployed quickly.
  • Portability across machines – an application and all its dependencies can be bundled into a single container that is independent from the host version of the Linux kernel, platform distribution, or deployment model. This container can be transferred to another machine that runs Docker, and executed there without compatibility issues.
  • Version control and component reuse – you can track successive versions of a container, inspect differences, or roll-back to previous versions. Containers reuse components from the preceding layers, which makes them noticeably lightweight.
  • Sharing – you can use a remote repository to share your container with others. Red Hat provides a registry for this purpose, and it is also possible to configure your own private repository.
  • Lightweight footprint and minimal overhead – Docker images are typically very small, which facilitates rapid delivery and reduces the time to deploy new application containers.
  • Simplified maintenance – Docker reduces effort and risk of problems with application dependencies.

10.3. Comparison with Virtual Machines

Virtual machines represent an entire server with all of the associated software and maintenance concerns. Docker containers provide application isolation and can be configured with minimum run-time environments. In a Docker container, the kernel and parts of the operating system infrastructure are shared. For the virtual machine, a full operating system must be included.
  • You can create or destroy containers quickly and easily. Virtual machines require full installations and more computing resources to execute.
  • Containers are lightweight, therefore, more containers than virtual machines can run simultaneously on a host machine.
  • Containers share resources efficiently, whereas virtual machines are fully isolated. Multiple variations of an application can therefore run in containers while remaining very lightweight; for example, shared binaries are not duplicated on the system.
  • Virtual machines can be migrated while still executing; containers, however, cannot be migrated while executing and must be stopped before moving from host machine to host machine.
Containers do not replace virtual machines for all use cases. Careful evaluation is still required to determine what is best for your application.
To quickly get up-and-running with Docker Containers, refer to Get Started with Docker Containers.
The Docker FAQ contains more information about Linux Containers, Docker, subscriptions and support.

10.4. Using Docker on Red Hat Enterprise Linux 7.1

Docker, Kubernetes, and Docker Registry have been released as part of the Extras channel in Red Hat Enterprise Linux. Once the Extras channel has been enabled, the packages can be installed in the usual way. For more information on installing packages or enabling channels, see the System Administrator's Guide.
Red Hat provides a registry of certified docker images. This registry provides base images for building applications on both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 and pre-built solutions usable on Red Hat Enterprise Linux 7.1 with Docker. For more information about the registry and a list of available packages, see Docker Images.

Chapter 11. Authentication and Interoperability

Manual Backup and Restore Functionality

This update introduces the ipa-backup and ipa-restore commands to Identity Management (IdM), which allow users to manually back up their IdM data and restore them in case of a hardware failure. For further information, see the ipa-backup(1) and ipa-restore(1) manual pages or the related FreeIPA documentation.
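A command sketch of the backup and restore cycle (run as root on the IdM server; the backup directory name below is illustrative):

```shell
# Create a full backup of the IdM data:
ipa-backup
# Backups are written under /var/lib/ipa/backup/ by default.

# Restore from a previously created backup:
ipa-restore ipa-full-2015-01-14-01-00-00
```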

Certificate Authority Management Tool

The ipa-cacert-manage renew command has been added to the Identity Management (IdM) client, which makes it possible to renew the IdM Certificate Authority (CA) certificate. This enables users to smoothly install and set up IdM using a certificate signed by an external CA. For details on this feature, see the ipa-cacert-manage(1) manual page or the related FreeIPA documentation.

Increased Access Control Granularity

It is now possible to regulate read permissions of specific sections in the Identity Management (IdM) server UI. This allows IdM server administrators to limit the accessibility of privileged content only to chosen users. In addition, authenticated users of the IdM server no longer have read permissions to all of its contents by default. These changes improve the overall security of the IdM server data. For further details, see the related FreeIPA documentation.

Limited Domain Access for Unprivileged Users

The domains= option has been added to the pam_sss module, which overrides the domains= option in the /etc/sssd/sssd.conf file. In addition, this update adds the pam_trusted_users option, which allows the user to specify a list of numerical UIDs or user names that are trusted by the SSSD daemon, and the pam_public_domains option, which specifies a list of domains accessible even to untrusted users. These additions allow the configuration of systems where regular users are allowed to access the specified applications but do not have login rights on the system itself. For additional information on this feature, see the related SSSD documentation.
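A minimal configuration sketch, in which all domain and user names are illustrative:

```
# /etc/sssd/sssd.conf (illustrative values)
[sssd]
domains = example.com, public.example.com

[pam]
# UIDs or user names trusted by the SSSD daemon:
pam_trusted_users = 1000, appuser
# Domains accessible even to untrusted users:
pam_public_domains = public.example.com
```

A PAM-aware service can then be limited to a single domain with a line such as `auth required pam_sss.so domains=example.com` in its PAM configuration file, overriding the domains= list in sssd.conf.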

SSSD Integration for the Common Internet File System

A plug-in interface provided by SSSD has been added to configure the way in which the cifs-utils utility conducts the ID-mapping process. As a result, an SSSD client can now access a CIFS share with the same functionality as a client running the Winbind service. For further information, see the related SSSD documentation.

Support for Migration from WinSync to Trust

This update implements the new ID Views mechanism of user configuration. It enables the migration of Identity Management users from a WinSync synchronization-based architecture used by Active Directory to an infrastructure based on Cross-Realm Trusts. For the details of ID Views and the migration procedure, see the related FreeIPA documentation.

Automatic Data Provider Configuration

The ipa-client-install command now by default configures SSSD as the data provider for the sudo service. This behavior can be disabled by using the --no-sudo option. In addition, the --nisdomain option has been added to specify the NIS domain name for the Identity Management client installation, and the --no_nisdomain option has been added to avoid setting the NIS domain name. If neither of these options are used, the IPA domain is used instead.

Use of AD and LDAP sudo Providers

The AD provider is a back end used to connect to an Active Directory server. In Red Hat Enterprise Linux 7.1, using the AD sudo provider together with the LDAP provider is supported as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the domain section of the sssd.conf file.
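The setting goes into the domain section of /etc/sssd/sssd.conf; the domain name below is illustrative:

```
[domain/example.com]
id_provider = ad
access_provider = ad
# Technology Preview: use the AD sudo provider
sudo_provider = ad
```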

Chapter 12. Security

SCAP Security Guide

The scap-security-guide package has been included in Red Hat Enterprise Linux 7.1 to provide security guidance, baselines, and associated validation mechanisms. The guidance is specified in the Security Content Automation Protocol (SCAP), which constitutes a catalog of practical hardening advice. SCAP Security Guide contains the necessary data to perform system security compliance scans regarding prescribed security policy requirements; both a written description and an automated test (probe) are included. By automating the testing, SCAP Security Guide provides a convenient and reliable way to verify system compliance regularly.
The Red Hat Enterprise Linux 7.1 system administrator can use the oscap command line tool from the openscap-utils package to verify that the system conforms to the provided guidelines. See the scap-security-guide(8) manual page for further information.
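A command sketch of a compliance scan; the data stream path and profile ID are illustrative, as they vary between releases of the scap-security-guide package:

```shell
# List the profiles available in the installed SCAP content:
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

# Evaluate the system against a selected profile and write an HTML report:
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_pci-dss \
    --report /tmp/scan-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
```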

SELinux Policy

In Red Hat Enterprise Linux 7.1, the SELinux policy has been modified; services without their own SELinux policy that previously ran in the init_t domain now run in the newly-added unconfined_service_t domain. See the Unconfined Processes chapter in the SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7.1.

New Features in OpenSSH

The OpenSSH set of tools has been updated to version 6.6.1p1, which adds several new features related to cryptography:
  • Key exchange using elliptic-curve Diffie-Hellman in Daniel Bernstein's Curve25519 is now supported. This method is now the default provided both the server and the client support it.
  • Support has been added for using the Ed25519 elliptic-curve signature scheme as a public key type. Ed25519, which can be used for both user and host keys, offers better security than ECDSA and DSA as well as good performance.
  • A new private-key format has been added that uses the bcrypt key-derivation function (KDF). By default, this format is used for Ed25519 keys but may be requested for other types of keys as well.
  • A new transport cipher, chacha20-poly1305@openssh.com, has been added. It combines Daniel Bernstein's ChaCha20 stream cipher and the Poly1305 message authentication code (MAC).
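The new key type and private-key format can be exercised with ssh-keygen; the file names below are illustrative:

```shell
# Remove any leftovers from previous runs:
rm -f /tmp/id_ed25519_demo /tmp/id_ed25519_demo.pub /tmp/id_rsa_demo /tmp/id_rsa_demo.pub

# Generate an Ed25519 key pair; Ed25519 keys always use the new
# bcrypt-KDF private-key format:
ssh-keygen -t ed25519 -f /tmp/id_ed25519_demo -N '' -q

# Request the new format explicitly for another key type (here RSA),
# with 16 rounds of the bcrypt KDF:
ssh-keygen -t rsa -o -a 16 -f /tmp/id_rsa_demo -N '' -q

# Inspect the generated public keys:
cat /tmp/id_ed25519_demo.pub /tmp/id_rsa_demo.pub
```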

New Features in Libreswan

The Libreswan implementation of IPsec VPN has been updated to version 3.12, which adds several new features and improvements:
  • New ciphers have been added.
  • IKEv2 support has been improved, mainly with regard to CP payloads, CREATE_CHILD_SA requests, and the newly introduced support for Authentication Header (AH).
  • Intermediate certificate chain support has been added in IKEv1 and IKEv2.
  • Connection handling has been improved.
  • Interoperability has been improved with OpenBSD, Cisco, and Android systems.
  • systemd support has been improved.
  • Support has been added for hashed CERTREQ and traffic statistics.

New Features in TNC

The Trusted Network Connect (TNC) Architecture, provided by the strongimcv package, has been updated and is now based on strongSwan 5.2.0. The following new features and improvements have been added to the TNC:
  • The PT-EAP transport protocol (RFC 7171) for Trusted Network Connect has been added.
  • The Attestation IMC/IMV pair now supports the IMA-NG measurement format.
  • The Attestation IMV support has been improved by implementing a new TPMRA work item.
  • Support has been added for a JSON-based REST API with SWID IMV.
  • The SWID IMC can extract all installed packages from the dpkg, rpm, or pacman package managers using the swidGenerator, which generates SWID tags according to the new ISO/IEC 19770-2:2014 standard.
  • The libtls TLS 1.2 implementation as used by EAP-(T)TLS and other protocols has been extended by AEAD mode support, currently limited to AES-GCM.
  • The aikgen tool now generates an Attestation Identity Key bound to a TPM.
  • Support has been improved for IMVs sharing the access requestor ID, device ID, and product information of an access requestor via a common imv_session object.
  • Several bugs have been fixed in the existing IF-TNCCS (PB-TNC) and IF-M (PA-TNC) protocols, and in the OS IMC/IMV pair.

New Features in GnuTLS

The GnuTLS implementation of the SSL, TLS, and DTLS protocols has been updated to version 3.3.8, which offers a number of new features and improvements:
  • Support for DTLS 1.2 has been added.
  • Support for Application Layer Protocol Negotiation (ALPN) has been added.
  • The performance of elliptic-curve cipher suites has been improved.
  • New cipher suites, RSA-PSK and CAMELLIA-GCM, have been added.
  • Native support for the Trusted Platform Module (TPM) standard has been added.
  • Support for PKCS#11 smart cards and hardware security modules (HSM) has been improved in several ways.
  • Compliance with the FIPS 140 security standards (Federal Information Processing Standards) has been improved in several ways.

Chapter 13. Desktop

Support for Quad-buffered OpenGL Stereo Visuals

GNOME Shell and the Mutter compositing window manager now allow you to use quad-buffered OpenGL stereo visuals on supported hardware. You need to have the NVIDIA Display Driver version 337 or later installed to be able to properly use this feature.

Online Account Providers

A new GSettings key org.gnome.online-accounts.whitelisted-providers has been added to GNOME Online Accounts (provided by the gnome-online-accounts package). This key provides a list of online account providers that are explicitly allowed to be loaded on startup. By specifying this key, system administrators can enable appropriate providers or selectively disable others.
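The key can be set and inspected with the gsettings tool on a system with GNOME installed; the provider identifiers below are illustrative:

```shell
# Allow only the listed providers to be loaded on startup:
gsettings set org.gnome.online-accounts whitelisted-providers "['google', 'owncloud']"

# Verify the current value:
gsettings get org.gnome.online-accounts whitelisted-providers
```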

Chapter 14. Supportability and Maintenance

ABRT Authorized Micro-Reporting

In Red Hat Enterprise Linux 7.1, the Automatic Bug Reporting Tool (ABRT) is more tightly integrated with the Red Hat Customer Portal and is capable of directly sending micro-reports to the Portal. This allows ABRT to provide users with aggregated crash statistics. In addition, ABRT can use the entitlement certificates or the user's Portal credentials to authorize micro-reports, which simplifies the configuration of this feature.
The integrated authorization allows ABRT to reply to a micro-report with a rich text which may include possible steps to fix the cause of the micro-report. The authorization can also be used to enable notifications about important updates related to the micro-reports, and these notifications can be directly delivered to administrators.
Note that authorized micro-reporting is enabled automatically for customers who have already enabled ABRT micro-reports in Red Hat Enterprise Linux 7.0.
See the Customer Portal for more information on this feature.

Chapter 15. Red Hat Software Collections

Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures.
Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools.
Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose at any time which package version they want to run.
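A command sketch of the scl workflow; the collection name rh-python34 is illustrative and depends on which collections are installed:

```shell
# List the Software Collections installed on the system:
scl --list

# Run a single command with a collection's environment enabled:
scl enable rh-python34 'python --version'

# Or start an interactive shell with the collection enabled:
scl enable rh-python34 bash
```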

Important

Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle.
Red Hat Developer Toolset is now a part of Red Hat Software Collections, included as a separate Software Collection. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides the current versions of the GNU Compiler Collection, GNU Debugger, Eclipse development platform, and other development, debugging, and performance monitoring tools.
See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections.
See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.

Part II. Device Drivers

This part provides a comprehensive listing of all device drivers that were updated in Red Hat Enterprise Linux 7.1.

Chapter 16. Storage Driver Updates

  • The hpsa driver has been upgraded to version 3.4.4-1-RH1.
  • The qla2xxx driver has been upgraded to version 8.07.00.08.07.1-k1.
  • The qla4xxx driver has been upgraded to version 5.04.00.04.07.01-k0.
  • The qlcnic driver has been upgraded to version 5.3.61.
  • The netxen_nic driver has been upgraded to version 4.0.82.
  • The qlge driver has been upgraded to version 1.00.00.34.
  • The bnx2fc driver has been upgraded to version 2.4.2.
  • The bnx2i driver has been upgraded to version 2.7.10.1.
  • The cnic driver has been upgraded to version 2.5.20.
  • The bnx2x driver has been upgraded to version 1.710.51-0.
  • The bnx2 driver has been upgraded to version 2.2.5.
  • The megaraid_sas driver has been upgraded to version 06.805.06.01-rc1.
  • The mpt2sas driver has been upgraded to version 18.100.00.00.
  • The ipr driver has been upgraded to version 2.6.0.
  • The kmod-lpfc packages have been added to Red Hat Enterprise Linux 7, which ensures greater stability when using the lpfc driver with Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) adapters. The lpfc driver has been upgraded to version 0:10.2.8021.1.
  • The be2iscsi driver has been upgraded to version 10.4.74.0r.
  • The nvme driver has been upgraded to version 0.9.

Chapter 17. Network Driver Updates

  • The bna driver has been upgraded to version 3.2.23.0r.
  • The cxgb3 driver has been upgraded to version 1.1.5-ko.
  • The cxgb3i driver has been upgraded to version 2.0.0.
  • The iw_cxgb3 driver has been upgraded to version 1.1.
  • The cxgb4 driver has been upgraded to version 2.0.0-ko.
  • The cxgb4vf driver has been upgraded to version 2.0.0-ko.
  • The cxgb4i driver has been upgraded to version 0.9.4.
  • The iw_cxgb4 driver has been upgraded to version 0.1.
  • The e1000e driver has been upgraded to version 2.3.2-k.
  • The igb driver has been upgraded to version 5.2.13-k.
  • The igbvf driver has been upgraded to version 2.0.2-k.
  • The ixgbe driver has been upgraded to version 3.19.1-k.
  • The ixgbevf driver has been upgraded to version 2.12.1-k.
  • The i40e driver has been upgraded to version 1.0.11-k.
  • The i40evf driver has been upgraded to version 1.0.1.
  • The e1000 driver has been upgraded to version 7.3.21-k8-NAPI.
  • The mlx4_en driver has been upgraded to version 2.2-1.
  • The mlx4_ib driver has been upgraded to version 2.2-1.
  • The mlx5_core driver has been upgraded to version 2.2-1.
  • The mlx5_ib driver has been upgraded to version 2.2-1.
  • The ocrdma driver has been upgraded to version 10.2.287.0u.
  • The ib_ipoib driver has been upgraded to version 1.0.0.
  • The ib_qib driver has been upgraded to version 1.11.
  • The enic driver has been upgraded to version 2.1.1.67.
  • The be2net driver has been upgraded to version 10.4r.
  • The tg3 driver has been upgraded to version 3.137.
  • The r8169 driver has been upgraded to version 2.3LK-NAPI.

Chapter 18. Graphics Driver Updates

  • The vmwgfx driver has been upgraded to version 2.6.0.0.

Revision History

Revision 1.0-9    Wed Jan 14 2015    Milan Navrátil
Release of the Red Hat Enterprise Linux 7.1 Release Notes.
Revision 1.0-8    Thu Dec 15 2014    Jiří Herrmann
Release of the Red Hat Enterprise Linux 7.1 Beta Release Notes.