Installing and Configuring Neutron (Provider-LinuxBridge)
OpenStack supports several network modes: the network type can be provider or self-service, and the deployment mode can be LinuxBridge or OVS. In an actual deployment, select exactly one of the four combinations: provider+LinuxBridge, provider+OVS, self-service+LinuxBridge, or self-service+OVS.
Controller Node
Perform the following operations on the controller node of the provider-LinuxBridge network type.
- Install components.
yum -y install openstack-neutron openstack-neutron-ml2 ebtables
- Modify the /etc/neutron/neutron.conf configuration file.
- Open the file.
vi /etc/neutron/neutron.conf
- Press i to enter the insert mode and perform the following configurations.
- Configure database access.
[database]
connection = mysql+pymysql://neutron:PASSWORD@controller/neutron
- Enable the ML2 plugin and disable other plugins.
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:PASSWORD@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
- Leave service_plugins unspecified.
- Replace PASSWORD with the password of the openstack user described in Installing RabbitMQ.
- Configure Identity service access.
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = PASSWORD
- Configure parameters in the [nova] section.
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = PASSWORD
- By default, the configuration file does not contain this section. You need to add it.
- Replace PASSWORD with the password of the nova user described in Creating the Nova Database.
- Configure the lock path.
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
- Press Esc, type :wq!, and press Enter to save the file and exit.
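If you prefer to script these edits rather than using vi, a small helper can insert settings non-interactively. This is a minimal, hypothetical sketch (the ini_set helper and the /tmp demo path are not part of this procedure; the helper appends under a section header rather than replacing an existing key, so it suits only a freshly installed configuration file):

```shell
# ini_set FILE SECTION KEY VALUE - insert "KEY = VALUE" directly under
# [SECTION]. Simplified sketch: it appends rather than replacing an
# existing key, so use it only on a fresh configuration file.
ini_set() {
    awk -v s="[$2]" -v kv="$3 = $4" '
        { print }
        $0 == s { print kv }   # emit the new setting just under the header
    ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Demo against a scratch file mimicking the [database] section layout.
printf '[DEFAULT]\n\n[database]\n' > /tmp/neutron.conf.demo
ini_set /tmp/neutron.conf.demo database connection \
    "mysql+pymysql://neutron:PASSWORD@controller/neutron"
# Prints the [database] header followed by the new connection line.
grep -A1 '^\[database\]' /tmp/neutron.conf.demo
```

The same pattern applies to every configuration edit in this section; only the file, section, key, and value change.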
- Modify the ML2 plugin configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.
- Open the file.
vi /etc/neutron/plugins/ml2/ml2_conf.ini
- Press i to enter the insert mode and add the following content to the file to enable the flat and VLAN networks:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider-arm,provider-x86
[ml2_type_vlan]
network_vlan_ranges = provider-arm,provider-x86
[securitygroup]
enable_ipset = true
Leave tenant_network_types unspecified.
- Press Esc, type :wq!, and press Enter to save the file and exit.
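The flat_networks labels (provider-arm and provider-x86) are only identifiers; they take effect when a network is created against them after the cloud is up. As a hypothetical sketch of how such a label is consumed later (the network name provider-net-x86 is a placeholder, not part of this procedure, and the command requires a deployed cloud with admin credentials sourced):

```shell
# Hypothetical: create a flat provider network bound to the
# provider-x86 physical network label declared in ml2_conf.ini.
openstack network create provider-net-x86 \
    --provider-network-type flat \
    --provider-physical-network provider-x86 \
    --availability-zone-hint az-x86
```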
- Check that the Linux OS kernel supports bridge filters.
- Open the /etc/sysctl.conf file.
vi /etc/sysctl.conf
- Press i to enter the insert mode and add the following content to the file:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

- Press Esc, type :wq!, and press Enter to save the file and exit.
- Add a network bridge filter.
modprobe br_netfilter
sysctl -p
sed -i '$amodprobe br_netfilter' /etc/rc.local
- Initialize the network by creating the symbolic link /etc/neutron/plugin.ini, which the network service initialization scripts expect to point to the ML2 plugin configuration file.
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- Populate the database.
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
- Enable the network service and configure it to start as the system boots.
systemctl enable neutron-server.service
systemctl start neutron-server.service
Network Nodes
Perform the following operations on the network nodes (x86-compute and arm-compute) of the provider-LinuxBridge network type.
- Install components.
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
- Modify the /etc/neutron/neutron.conf file to configure common components.
- Open the file.
vi /etc/neutron/neutron.conf
- Press i to enter the insert mode and perform the following configurations.
- Configure RabbitMQ message queue access.
[DEFAULT]
transport_url = rabbit://openstack:PASSWORD@controller
- Configure Identity service access.
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = PASSWORD
- Configure the lock path.
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Configure the DHCP agent.
- Open the /etc/neutron/dhcp_agent.ini file.
vi /etc/neutron/dhcp_agent.ini
- Press i to enter the insert mode and perform the following configurations.
- For the x86 network nodes, which are az-x86 nodes, add the following configuration:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[AGENT]
availability_zone = az-x86
- For the Arm network nodes, which are az-arm nodes, add the following configuration:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[AGENT]
availability_zone = az-arm
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Configure the metadata agent.
- Open the /etc/neutron/metadata_agent.ini file.
vi /etc/neutron/metadata_agent.ini
- Press i to enter the insert mode, and configure the metadata host and shared key:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = PASSWORD
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Configure the Linux bridge agent and modify the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file.
- Open the file.
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
- Press i to enter the insert mode and map the provider virtual network to the physical network.
- For the x86 network nodes, which are az-x86 nodes, configure provider-x86:
[linux_bridge]
physical_interface_mappings = provider-x86:enp64s0
- For the Arm network nodes, which are az-arm nodes, configure provider-arm:
[linux_bridge]
physical_interface_mappings = provider-arm:enp64s0
In this example, the provider network uses the enp64s0 network port; set the port based on your actual environment. The physical NIC is configured for the service network, not the management network. For details, see Cluster Environment.
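If you are not sure which port name applies on a given node, the interface names known to the kernel can be listed before editing the file:

```shell
# List all network interfaces on this node; pick the one cabled to the
# provider (service) network, not the management network.
ls /sys/class/net
```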
- Disable the VXLAN network.
[vxlan]
enable_vxlan = false
- Enable the security group, configure the iptables firewall driver for the Linux bridge, save the configuration, and exit.
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Check that the Linux OS kernel supports bridge filters.
- Open the file.
vi /etc/sysctl.conf
- Press i to enter the insert mode and add the following content to the file:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

- Press Esc, type :wq!, and press Enter to save the file and exit.
- Add the network bridge filter.
modprobe br_netfilter
sysctl -p
sed -i '$amodprobe br_netfilter' /etc/rc.local
- Enable the network service and configure it to start as the system boots.
systemctl enable neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service
Compute Nodes
Perform the following operations on the compute nodes (x86-compute and arm-compute) of the provider-LinuxBridge network type. Because the network node and compute node are deployed on the same node, skip the repeated configurations if there are any.
- Install components.
yum -y install openstack-neutron-linuxbridge ebtables ipset
- Edit the /etc/neutron/neutron.conf file to configure public components.
- Open the file.
vi /etc/neutron/neutron.conf
- Press i to enter the insert mode and perform the following configurations.
- Configure RabbitMQ message queue access.
[DEFAULT]
transport_url = rabbit://openstack:PASSWORD@controller
- Configure Identity service access.
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = PASSWORD
- Configure the lock path.
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Add the following to the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file to configure the Linux bridge agent:
- Open the file.
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
- Press i to enter the insert mode and perform the following configurations.
- Map the provider virtual network to the provider physical network port.
- For the x86 compute nodes, which are az-x86 nodes, configure provider-x86:
[linux_bridge]
physical_interface_mappings = provider-x86:enp64s0
- For the Arm compute nodes, which are az-arm nodes, configure provider-arm:
[linux_bridge]
physical_interface_mappings = provider-arm:enp64s0
In this example, the provider network uses the enp64s0 network port; set the port based on your actual environment. The physical NIC is configured for the service network, not the management network. For details, see Cluster Environment.
- Disable the VXLAN network.
[vxlan]
enable_vxlan = false
- Enable the security group and configure the iptables firewall driver for the Linux bridge.
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Check that the Linux OS kernel supports bridge filters.
- Open the /etc/sysctl.conf file.
vi /etc/sysctl.conf
- Press i to enter the insert mode and add the following content to the file:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

- Press Esc, type :wq!, and press Enter to save the file and exit.
- Add the network bridge filter.
modprobe br_netfilter
sysctl -p
sed -i '$amodprobe br_netfilter' /etc/rc.local
- Enable the Linux bridge agent and configure it to start as the system boots.
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service