diff --git a/_topic_map.yml b/_topic_map.yml index ffac2d18138a..19fa81b5f0eb 100644 --- a/_topic_map.yml +++ b/_topic_map.yml @@ -1058,12 +1058,6 @@ Topics: File: jenkins_slaves - Name: Other Container Images File: other_container_images -- Name: xPaaS Middleware Images - Dir: xpaas_images - Distros: openshift-online,openshift-enterprise,openshift-dedicated - Topics: - - Name: Overview - File: index - Name: Revision History File: revhistory_using_images Distros: openshift-enterprise,openshift-dedicated diff --git a/getting_started/developers_cli.adoc b/getting_started/developers_cli.adoc index 7ce7c00a643e..daf954b24852 100644 --- a/getting_started/developers_cli.adoc +++ b/getting_started/developers_cli.adoc @@ -93,8 +93,7 @@ Other images provided by {product-title} include: ifdef::openshift-enterprise,openshift-dedicated[] In addition, JBoss Middleware has put together a broad range of https://github.com/jboss-openshift/application-templates[{product-title} -templates] as well as xref:../using_images/xpaas_images/index.adoc#using-images-xpaas-images-index[images] as -part of their xPaaS services. +templates]. The technologies available with the xPaaS services in particular include: diff --git a/install/configuring_inventory_file.adoc b/install/configuring_inventory_file.adoc index 861e07601ba9..fc5198160e59 100644 --- a/install/configuring_inventory_file.adoc +++ b/install/configuring_inventory_file.adoc @@ -1725,7 +1725,6 @@ openshift_logging_es_pvc_dynamic=true ---- For additional information on dynamic provisioning, see -xref:../install_config/persistent_storage/dynamically_glusterfs.adoc#overview[Using GlusterFS] and xref:../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Dynamic provisioning and creating storage classes]. 
diff --git a/using_images/xpaas_images/a_mq.adoc b/using_images/xpaas_images/a_mq.adoc deleted file mode 100644 index a6ba91964e85..000000000000 --- a/using_images/xpaas_images/a_mq.adoc +++ /dev/null @@ -1,205 +0,0 @@ -[[using-images-xpaas-images-a-mq]] -= Red Hat JBoss A-MQ xPaaS Image -{product-author} -{product-version} -:data-uri: -:icons: -:experimental: -:toc: macro -:toc-title: -:prewrap!: - -toc::[] - -== Overview - -Red Hat JBoss A-MQ (JBoss A-MQ) is available as a containerized xPaaS image that -is designed for use with OpenShift. It allows developers to quickly deploy an -A-MQ message broker in a hybrid cloud environment. - -[IMPORTANT] -==== -There are significant differences in supported configurations and functionality -in the JBoss A-MQ image compared to the regular release of JBoss A-MQ. -==== - -This topic details the differences between the JBoss A-MQ xPaaS image and the -regular release of JBoss A-MQ, and provides instructions specific to running and -configuring the JBoss A-MQ xPaaS image. Documentation for other JBoss A-MQ -functionality not specific to the JBoss A-MQ xPaaS image can be found in the -https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/[JBoss A-MQ -documentation on the Red Hat Customer Portal]. - -== Differences Between the JBoss A-MQ xPaaS Image and the Regular Release of JBoss A-MQ - -There are several major functionality differences in the OpenShift JBoss A-MQ -xPaaS image: - -* The Karaf shell is not available. -* The Fuse Management Console (Hawtio) is not available. -* Configuration of the broker can be performed: -** using parameters specified in the A-MQ application template, as described in -xref:configuring-params[Application Template Parameters]. -** using the S2I (Source-to-image) tool, as described in -xref:configuring-sti[Configuration Using S2I]. 
- -ifdef::openshift-enterprise[] -== Using the JBoss A-MQ xPaaS Image Streams and Application Templates -The Red Hat xPaaS middleware images were -xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[automatically created during the installation] -of OpenShift along with the other default image streams and templates. -endif::[] - -== Configuring the JBoss A-MQ Image - -[[configuring-params]] -=== Application Template Parameters - -Basic configuration of the JBoss A-MQ xPaaS image is performed by specifying -values of application template parameters. The following parameters can be -configured: - -`*AMQ_RELEASE*`:: - The JBoss A-MQ release version. This determines which JBoss A-MQ image will be - used as a basis for the application. At the moment, only version _6.2_ is - available. -`*APPLICATION_NAME*`:: - The name of the application used internally in OpenShift. It is used in names - of services, pods, and other objects within the application. -`*MQ_USERNAME*`:: - The user name used for authentication to the broker. In a standard - non-containerized JBoss A-MQ, you would specify the user name in the - *_AMQ_HOME/opt/user.properties_* file. If no value is specified, a random user - name is generated. -`*MQ_PASSWORD*`:: - The password used for authentication to the broker. In a standard - non-containerized JBoss A-MQ, you would specify the password in the - *_AMQ_HOME/opt/user.properties_* file. If no value is specified, a random - password is generated. -`*AMQ_ADMIN_USERNAME*`:: - The user name used for admin authentication to the broker. If no value is specified, a random user name is generated. -`*AMQ_ADMIN_PASSWORD*`:: - The password used for admin authentication to the broker. If no value is specified, a random password is generated. -`*MQ_PROTOCOL*`:: - Comma-separated list of the messaging protocols used by the broker. Available - options are _amqp_, _mqtt_, _openwire_, and _stomp_. 
If left empty, all - available protocols will be enabled. Note that for integration of the - image with Red Hat JBoss Enterprise Application Platform, the _openwire_ - protocol must be specified, while other protocols can be optionally specified - as well. -`*MQ_QUEUES*`:: - Comma-separated list of queues available by default on the broker on its - startup. -`*MQ_TOPICS*`:: - Comma-separated list of topics available by default on the broker on its - startup. -`*AMQ_SECRET*`:: - The name of a secret containing SSL related files. -`*AMQ_TRUSTSTORE*`:: - The SSL trust store filename. -`*AMQ_KEYSTORE*`:: -The SSL key store filename. - -[[configuring-sti]] -=== Configuration Using S2I - -Configuration of the JBoss A-MQ image can also be modified using the -Source-to-Image feature, described in full detail at -xref:../../creating_images/s2i.adoc#creating-images-s2i[S2I Requirements]. - -Custom A-MQ broker configuration can be specified by creating an -*_openshift-activemq.xml_* file inside the *_git_* directory of your -application's Git project root. On each commit, the file will be copied to the -*_conf_* directory in the A-MQ root and its contents used to configure the -broker. - -== Configuring the JBoss A-MQ Persistent Image - -[[configuring-params-persistence]] -=== Application Template Parameters - -Basic configuration of the JBoss A-MQ Persistent xPaaS image is performed by specifying -values of application template parameters. The following parameters can be -configured: - -`*AMQ_RELEASE*`:: - The JBoss A-MQ release version. This determines which JBoss A-MQ image will be - used as a basis for the application. At the moment, only version _6.2_ is - available. -`*APPLICATION_NAME*`:: - The name of the application used internally in OpenShift. 
It is used in names - of services, pods, and other objects within the application. -`*MQ_PROTOCOL*`:: - Comma-separated list of the messaging protocols used by the broker. Available - options are _amqp_, _mqtt_, _openwire_, and _stomp_. If left empty, all - available protocols will be enabled. Note that for integration of the - image with Red Hat JBoss Enterprise Application Platform, the _openwire_ - protocol must be specified, while other protocols can be optionally specified - as well. -`*MQ_QUEUES*`:: - Comma-separated list of queues available by default on the broker on its - startup. -`*MQ_TOPICS*`:: - Comma-separated list of topics available by default on the broker on its - startup. -`*VOLUME_CAPACITY*`:: - The size of the persistent storage for database volumes. -`*MQ_USERNAME*`:: - The user name used for authentication to the broker. In a standard - non-containerized JBoss A-MQ, you would specify the user name in the - *_AMQ_HOME/opt/user.properties_* file. If no value is specified, a random user - name is generated. -`*MQ_PASSWORD*`:: - The password used for authentication to the broker. In a standard - non-containerized JBoss A-MQ, you would specify the password in the - *_AMQ_HOME/opt/user.properties_* file. If no value is specified, a random - password is generated. -`*AMQ_ADMIN_USERNAME*`:: - The user name used for admin authentication to the broker. If no value is specified, a random user name is generated. -`*AMQ_ADMIN_PASSWORD*`:: - The password used for admin authentication to the broker. If no value is specified, a random password is generated. -`*AMQ_SECRET*`:: - The name of a secret containing SSL related files. -`*AMQ_TRUSTSTORE*`:: - The SSL trust store filename. -`*AMQ_KEYSTORE*`:: - The SSL key store filename. 
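As an illustration, the parameters above could be supplied in the `parameters` block of the persistent A-MQ template when it is processed. This is a sketch only: the parameter names come from the list above, while every value (application name, queue names, capacity) is a hypothetical example.

```yaml
# Example values only; parameter names are those documented above.
parameters:
- name: APPLICATION_NAME
  value: broker
- name: MQ_PROTOCOL
  value: openwire
- name: MQ_QUEUES
  value: orders,returns
- name: VOLUME_CAPACITY
  value: 1Gi
```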
- -For more information, see -xref:../../dev_guide/persistent_volumes.adoc#dev-guide-persistent-volumes[Using Persistent Volumes]. - -== Security - -Only SSL connections can connect from outside of the OpenShift instance, -regardless of the protocol specified in the `*MQ_PROTOCOL*` property of the A-MQ -application templates. The non-SSL version of the protocols can only be used -inside the OpenShift instance. - -For security reasons, using the default KeyStore and TrustStore generated by the -system is discouraged. It is recommended to generate your own KeyStore and -TrustStore and supply them to the image using the OpenShift secrets mechanism or -S2I. - -== High-Availability and Scalability - -The JBoss xPaaS A-MQ image is supported in two modes: - -1. A single A-MQ pod mapped to a Persistent Volume for message persistence. This mode provides message High Availability and guaranteed messaging but does not provide scalability. - -2. Multiple A-MQ pods using local message persistence (i.e. no mapped Persistent Volume). This mode provides scalability but does not provide message High Availability or guaranteed messaging. - -== Logging - -In addition to viewing the OpenShift logs, you can troubleshoot a running JBoss -A-MQ image by viewing the JBoss A-MQ logs that are output to the container's -console: - ----- -$ oc logs -f <pod_name> ----- - -[NOTE] -==== -By default, the OpenShift JBoss A-MQ xPaaS image does not have a file log -handler configured. Logs are only sent to the console. 
-==== diff --git a/using_images/xpaas_images/data_grid.adoc b/using_images/xpaas_images/data_grid.adoc deleted file mode 100644 index 26b57ba9bd1c..000000000000 --- a/using_images/xpaas_images/data_grid.adoc +++ /dev/null @@ -1,429 +0,0 @@ -[[using-images-xpaas-images-data-grid]] -= Red Hat JBoss Data Grid xPaaS Image -{product-author} -{product-version} -:data-uri: -:icons: -:experimental: -:toc: macro -:toc-title: - -toc::[] - -== Overview - -Red Hat JBoss Data Grid is available as a containerized xPaaS image that is designed for use with OpenShift. This image provides an in-memory distributed database so that developers can quickly access large amounts of data in a hybrid environment. - -[IMPORTANT] -There are significant differences in supported configurations and functionality -in the JBoss Data Grid xPaaS image compared to the full, non-PaaS release of JBoss Data Grid. - -This topic details the differences between the JBoss Data Grid xPaaS image and the -full, non-PaaS release of JBoss Data Grid, and provides instructions specific to running and -configuring the JBoss Data Grid xPaaS image. Documentation for other JBoss Data Grid -functionality not specific to the JBoss Data Grid xPaaS image can be found in the -https://access.redhat.com/documentation/en/red-hat-jboss-data-grid/[JBoss -Data Grid documentation on the Red Hat Customer Portal]. - -== Comparing the JBoss Data Grid xPaaS Image to the Regular Release of JBoss Data Grid - -=== Functionality Differences for OpenShift JBoss Data Grid xPaaS Images - -There are several major functionality differences in the OpenShift JBoss Data Grid xPaaS image: - -* The JBoss Data Grid Management Console is not available xref:Managing-OpenShift-JBoss-Data-Grid-xPaaS-Images[to manage OpenShift JBoss Data Grid xPaaS images]. -* The JBoss Data Grid Management CLI is only bound locally. This means that you can only access the Management CLI of a container from within the pod. -* Library mode is not supported. 
-* Only JDBC is supported for a backing cache-store. Support for remote cache stores is present only for data migration purposes. - -[[jdg-clustering]] -=== Forming a Cluster using the OpenShift JBoss Data Grid xPaaS Images - -Clustering is achieved through one of two discovery mechanisms: Kubernetes or DNS. This -is accomplished by configuring the JGroups protocol stack in *_clustered-openshift.xml_* with either -the *_<openshift.KUBE_PING/>_* or *_<openshift.DNS_PING/>_* elements. By default, *_KUBE_PING_* is the -pre-configured and supported protocol. - -For *_KUBE_PING_* to work, the following steps must be taken: - -1. The *_OPENSHIFT_KUBE_PING_NAMESPACE_* environment variable must be set (as seen in the xref:jdg-configuration-environment-variables-table[Configuration Environment Variables]). -If this variable is not set, then the server will act as a single-node cluster. -+ -2. The *_OPENSHIFT_KUBE_PING_LABELS_* environment variable must be set (as seen in the xref:jdg-configuration-environment-variables-table[Configuration Environment Variables]). -If this variable is not set, then pods outside the application (but in the same namespace) will attempt to join. -+ -3. Authorization must be granted to the service account the pod is running under so that it is allowed to access the Kubernetes REST API. This is done on the -command line: -+ -.Policy commands -==== -Using the *_default_* service account in the *_myproject_* namespace: ----- -oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q) ----- - -Using the *_eap-service-account_* in the *_myproject_* namespace: ----- -oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q) ----- -==== - -Once the above is configured, images will automatically join the cluster as they are deployed; -however, removing images from an active cluster, and therefore shrinking the cluster, -is not supported. 
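To illustrate the two environment variables, they might be set in the deployment's container spec along these lines. This is a sketch: the namespace is taken from the Kubernetes downward API, and the label value is a hypothetical example that must match the labels on your application's pods.

```yaml
# Sketch: container env entries enabling KUBE_PING discovery.
env:
- name: OPENSHIFT_KUBE_PING_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace    # current project, via the downward API
- name: OPENSHIFT_KUBE_PING_LABELS
  value: application=datagrid-app      # example; must match the pod labels
```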
- -[[jdg-endpoints]] -=== Endpoints - -Clients can access JBoss Data Grid via REST, HotRod, and memcached endpoints defined as usual in the cache's configuration. - -If a client attempts to access a cache via HotRod and is in the same project, it will be able to receive -the full cluster view and make use of consistent hashing; however, if it is in another project then the -client will be unable to receive the cluster view. Additionally, if the client is located outside of the -project that contains the HotRod cache, there will be additional latency due to extra network hops -being required to access the cache. - -[IMPORTANT] -Only caches with an exposed REST endpoint will be accessible outside of OpenShift. - -[[jdg-configuring-caches]] -=== Configuring Caches - -A list of caches may be defined by the *_CACHE_NAMES_* environment variable. By default, the -following caches are created: - -* *_default_* -* *_memcached_* - -Each cache's behavior may be controlled through the use of cache-specific environment variables, with -each environment variable expecting the cache's name as the prefix. For instance, any configuration applied to the *_default_* cache must begin with the *_DEFAULT__* prefix. To define the number of cache entry owners -for each entry in this cache, the *_DEFAULT_CACHE_OWNERS_* environment variable would be used. - -[[jdg-datasources]] -=== Datasources - -Datasources are automatically created based on the value of some environment variables. - -The most important variable is *_DB_SERVICE_PREFIX_MAPPING_*, which defines JNDI mappings for -datasources. 
It must be set to a comma-separated list of *_<name>-<database_type>=<PREFIX>_* triplets, where -*_name_* is used as the pool-name in the datasource, *_database_type_* determines which database driver to use, -and *_PREFIX_* is the prefix used in the names of environment variables, which are used to configure the datasource. - -[[jdg-jndi-mappings-for-datasources]] -==== JNDI Mappings for Datasources - -For each *_<name>-<database_type>=<PREFIX>_* triplet in the *_DB_SERVICE_PREFIX_MAPPING_* environment -variable, a separate datasource will be created by the launch script, which is executed when running the image. - -The *_<database_type>_* value determines the driver for the datasource. Currently, only *_postgresql_* and *_mysql_* -are supported. - -The *_<PREFIX>_* parameter can be chosen on your own. Do not use any special characters. - -[NOTE] -The first part (before the equal sign) of the *_DB_SERVICE_PREFIX_MAPPING_* should be lowercase. - -[[jdg-database-drivers]] -==== Database Drivers - -The JBoss Data Grid xPaaS image contains Java drivers for MySQL, PostgreSQL, and MongoDB -databases. Datasources are *generated only for MySQL and PostgreSQL databases*. - -[NOTE] -For MongoDB databases there are no JNDI mappings created because MongoDB is not a SQL database. - -[[jdg-database-drivers-examples]] -==== Examples - -The following examples demonstrate how datasources may be defined using the *_DB_SERVICE_PREFIX_MAPPING_* -environment variable. - -[[jdg-single-mapping]] -===== Single Mapping - -Consider the value *_test-postgresql=TEST_*. - -This will create a datasource named *_java:jboss/datasources/test_postgresql_*. Additionally, all of the required settings, -such as username and password, will be expected to be provided as environment variables with the *_TEST__* prefix, such as -*_TEST_USERNAME_* and *_TEST_PASSWORD_*. 
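The single-mapping convention above can be sketched in plain shell. This is an illustration of the naming rules only, not the image's actual launch script; the variable names in the sketch are arbitrary.

```shell
# Illustration of the DB_SERVICE_PREFIX_MAPPING naming convention.
# Given one triplet, derive the datasource JNDI name and the
# environment-variable prefix used to configure it.
triplet="test-postgresql=TEST"

service="${triplet%%=*}"    # test-postgresql (the service name)
prefix="${triplet#*=}"      # TEST            (the env var prefix)
name="${service%-*}"        # test            (the pool name)
db_type="${service##*-}"    # postgresql      (selects the driver)

echo "java:jboss/datasources/${name}_${db_type}"  # the JNDI name
echo "${prefix}_USERNAME"                         # where the username is read from
```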
- -[[jdg-multiple-mappings]] -===== Multiple Mappings - -Multiple database mappings may also be specified; for instance, consider the following value for the *_DB_SERVICE_PREFIX_MAPPING_* -environment variable: *_cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL_*. - -[NOTE] -Multiple datasource mappings should be separated with commas, as seen in the above example. - -This will create two datasources: - -1. *_java:jboss/datasources/test_mysql_* -+ -2. *_java:jboss/datasources/cloud_postgresql_* - -MySQL datasource configuration, such as the username and password, will be expected with the *_TEST_MYSQL_* prefix, -for example *_TEST_MYSQL_USERNAME_*. Similarly, the PostgreSQL datasource will expect to have environment variables -defined with the *_CLOUD__* prefix, such as *_CLOUD_USERNAME_*. - -[[jdg-datasource-environment-variables]] -==== Environment Variables - -A full list of datasource environment variables may be found at xref:jdg-datasource-environment-variables-list[Datasource Environment Variables]. - -[[jdg-security-domains]] -=== Security Domains - -To configure a new security domain, the *_SECDOMAIN_NAME_* environment variable must be defined, which will result -in the creation of a security domain named after the passed-in value. This domain may be configured through the use -of the xref:jdg-security-environment-variables[Security Environment Variables]. - -[[Managing-OpenShift-JBoss-Data-Grid-xPaaS-Images]] -=== Managing OpenShift JBoss Data Grid xPaaS Images - -A major difference in managing an OpenShift JBoss Data Grid xPaaS image is that there is no Management Console exposed for the JBoss Data Grid installation inside the image. Because images are intended to be immutable, with modifications being written to a non-persistent file system, the Management Console is not exposed. - -However, the JBoss Data Grid Management CLI (*_JDG_HOME/bin/jboss-cli.sh_*) is still -accessible from within the container for troubleshooting purposes. - -1. 
First open a remote shell session to the running pod: -+ ----- -$ oc rsh <pod_name> ----- -+ -2. Then run the following from the remote shell session to launch the JBoss Data Grid -Management CLI: -+ ----- -$ /opt/datagrid/bin/jboss-cli.sh ----- - -[WARNING] -Any configuration changes made using the JBoss Data Grid Management CLI on a running container will be lost when the container restarts. - -xref:Making-Configuration-Changes-Data-Grid[Making configuration changes to the -JBoss Data Grid instance inside the JBoss Data Grid xPaaS image] is different from the process you may be used to for a regular release of JBoss Data Grid. - -ifdef::openshift-enterprise[] -== Using the JBoss Data Grid xPaaS Image Streams and Application Templates - -The Red Hat xPaaS middleware images were -xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[automatically created during the installation] -of OpenShift along with the other default image streams and templates. -endif::[] - -[[Making-Configuration-Changes-Data-Grid]] -== Running and Configuring the JBoss Data Grid xPaaS Image - -You can make changes to the JBoss Data Grid configuration in the xPaaS image using either the S2I templates, or by using a modified JBoss Data Grid xPaaS image. - -=== Using the JBoss Data Grid xPaaS Image Source-to-Image (S2I) Process - -The recommended method to run and configure the OpenShift JBoss Data Grid xPaaS image is to use the OpenShift S2I process together with the application template parameters and environment variables. - -The S2I process for the JBoss Data Grid xPaaS image works as follows: - -. If there is a *_pom.xml_* file in the source repository, a Maven build is triggered with the contents of the `*$MAVEN_ARGS*` environment variable. -+ -. By default, the `package` goal is used with the `openshift` profile, including the system properties for skipping tests (`*-DskipTests*`) and enabling the Red Hat GA repository (`*-Dcom.redhat.xpaas.repo.redhatga*`). -+ -. 
The results of a successful Maven build are copied to *_JDG_HOME/standalone/deployments_*. This includes all JAR, WAR, and EAR files from the directory within the source repository specified by `*$ARTIFACT_DIR*` environment variable. The default value of `*$ARTIFACT_DIR*` is the *_target_* directory. -* Any JAR, WAR, and EAR in the *_deployments_* source repository directory are copied to the *_JDG_HOME/standalone/deployments_* directory. -* All files in the *_configuration_* source repository directory are copied to *_JDG_HOME/standalone/configuration_*. -+ -[NOTE] -If you want to use a custom JBoss Data Grid configuration file, it should be named *_clustered-openshift.xml_*. -. All files in the *_modules_* source repository directory are copied to *_JDG_HOME/modules_*. - -==== Using a Different JDK Version in the JBoss Data Grid xPaaS Image - -The JBoss Data Grid xPaaS image may come with multiple versions of OpenJDK installed, but only one is the default. For example, the JBoss Data Grid 6.5 xPaaS image comes with OpenJDK 1.7 and 1.8 installed, but OpenJDK 1.8 is the default. - -If you want the JBoss Data Grid xPaaS image to use a different JDK version than the default, you must: - -* Ensure that your *_pom.xml_* specifies to build your code using the intended JDK version. -* In the S2I application template, configure the image's `*JAVA_HOME*` environment variable to point to the intended JDK version. For example: -+ -==== - -[source,yaml] ----- -name: "JAVA_HOME" -value: "/usr/lib/jvm/java-1.7.0" ----- -==== - -=== Using a Modified JBoss Data Grid xPaaS Image - -An alternative method is to make changes to the image, and then use that modified image in OpenShift. - -The JBoss Data Grid configuration file that OpenShift uses inside the JBoss Data Grid xPaaS image is *_JDG_HOME/standalone/configuration/clustered-openshift.xml_*, and the JBoss Data Grid startup script is *_JDG_HOME/bin/openshift-launch.sh_*. 
- -You can run the JBoss Data Grid xPaaS image in Docker, make the required configuration changes using the JBoss Data Grid Management CLI (*_JDG_HOME/bin/jboss-cli.sh_*), and then commit the changed container as a new image. You can then use that modified image in OpenShift. - -[IMPORTANT] -It is recommended that you do not replace the OpenShift placeholders in the JBoss Data Grid xPaaS configuration file, as they are used to automatically configure services (such as messaging, datastores, HTTPS) during a container's deployment. These configuration values are intended to be set using environment variables. - -[NOTE] -Ensure that you follow the xref:../../creating_images/guidelines.adoc#creating-images-guidelines[guidelines for creating images]. - -[[jdg-environment-variables]] -== Environment Variables - -[[jdg-information-environment-variables]] -=== Information Environment Variables -The following information environment variables are designed to convey information about the image and should not be modified by the user: - -.Information Environment Variables -[options="header"] -|==================================== -| Variable Name | Description | Value -| *_JBOSS_DATAGRID_VERSION_* | The full, non-PaaS release that the xPaaS image is based from. | *_6.5.1.GA_* -| *_JBOSS_HOME_* | The directory where the JBoss distribution is located. | *_/opt/datagrid_* -| *_JBOSS_IMAGE_NAME_* | Image name, same as *_Name_* label | *_jboss-datagrid-6/datagrid65-openshift_* -| *_JBOSS_IMAGE_RELEASE_* | Image release, same as *_Release_* label | Example: dev -| *_JBOSS_IMAGE_VERSION_* | Image version, same as *_Version_* label | Example: *_1.2_* -| *_JBOSS_MODULES_SYSTEM_PKGS_* | | *_org.jboss.logmanager_* -| *_JBOSS_PRODUCT_* | | *_datagrid_* -| *_LAUNCH_JBOSS_IN_BACKGROUND_* | Allows the data grid server to be gracefully shutdown even when there is no terminal attached. 
| *_true_* -|==================================== - -[[jdg-configuration-environment-variables]] -=== Configuration Environment Variables -Configuration environment variables are designed to conveniently adjust the image without requiring a rebuild, and should be set by the user as desired. - -[[jdg-configuration-environment-variables-table]] -.Configuration Environment Variables -[options="header"] -|==================================== -| Variable Name | Description | Value -| *_CACHE_CONTAINER_START_* | Should this cache container be started on server startup, or lazily when requested by a service or deployment. Defaults to *_LAZY_* | Example: *_EAGER_* -| *_CACHE_CONTAINER_STATISTICS_* | Determines if the cache container collects statistics. Disable for optimal performance. Defaults to *_true_*. | Example: *_false_* -| *_CACHE_NAMES_* | List of caches to configure. Defaults to *_default,memcached_*, and each defined cache will be configured as a distributed-cache with a mode of *_SYNC_*. | Example: *_addressbook,addressbook_indexed_* -| *_CONTAINER_SECURITY_CUSTOM_ROLE_MAPPER_CLASS_* | Class of the custom principal to role mapper. | Example: *_com.acme.CustomRoleMapper_* -| *_CONTAINER_SECURITY_IDENTITY_ROLE_MAPPER_* | Set a role mapper for this cache container. Valid values are: *_identity-role-mapper_*,*_common-name-role-mapper_*,*_cluster-role-mapper_*,*_custom-role-mapper_*. | Example: *_identity-role-mapper_* -| *_CONTAINER_SECURITY_ROLES_* | Define role names and assign permissions to them. | Example: *_admin=ALL,reader=READ,writer=WRITE_* -| *_DB_SERVICE_PREFIX_MAPPING_* | Define a comma-separated list of datasources to configure. | Example: *_test-mysql=TEST_MYSQL_* -| *_DEFAULT_CACHE_* | Indicates the default cache for this cache container. | Example: *_addressbook_* -| *_ENCRYPTION_REQUIRE_SSL_CLIENT_AUTH_* | Whether to require client certificate authentication. Defaults to *_false_*. 
| Example: *_true_* -| *_HOTROD_AUTHENTICATION_* | If defined the hotrod-connectors will be configured with authentication in the *_ApplicationRealm_*. | Example: *_true_* -| *_HOTROD_ENCRYPTION_* | If defined the hotrod-connectors will be configured with encryption in the *_ApplicationRealm_*. | Example: *_true_* -| *_HOTROD_SERVICE_NAME_* | Name of the OpenShift service used to expose HotRod externally. | Example: *_DATAGRID_APP_HOTROD_* -| *_INFINISPAN_CONNECTORS_* | Comma-separated list of connectors to configure. Defaults to *_hotrod,memcached,rest_*. Note that if authorization or authentication is enabled on the cache then memcached should be removed as this protocol is inherently insecure. | Example: *_hotrod_* -| *_JAVA_OPTS_APPEND_* | The contents of *_JAVA_OPTS_APPEND_* is appended to *_JAVA_OPTS_* on startup. | Example: *_-Dfoo=bar_* -| *_JGROUPS_CLUSTER_PASSWORD_* | A password to control access to JGroups. Needs to be set consistently cluster-wide. The image default is to use the *_OPENSHIFT_KUBE_PING_LABELS_* variable value; however, the JBoss application templates generate and supply a random value. | Example: *_miR0JaDR_* -| *_MEMCACHED_CACHE_* | The name of the cache to use for the Memcached connector. | Example: *_memcached_* -| *_OPENSHIFT_KUBE_PING_LABELS_* | Clustering labels selector. | Example: *_application=eap-app_* -| *_OPENSHIFT_KUBE_PING_NAMESPACE_* | Clustering project namespace. | Example: *_myproject_* -| *_PASSWORD_* | Password for the JDG user. | Example: *_p@ssw0rd_* -| *_REST_SECURITY_DOMAIN_* | The security domain to use for authentication and authorization purposes. Defaults to *_none_* (no authentication). | Example: *_other_* -| *_TRANSPORT_LOCK_TIMEOUT_* | Infinispan uses a distributed lock to maintain a coherent transaction log during state transfer or rehashing, which means that only one cache can be doing state transfer or rehashing at the same time. 
This constraint is in place because more than one cache could be involved in a transaction. This timeout controls the time to wait to acquire a distributed lock. Defaults to *_240000_*. | Example: *_120000_* -| *_USERNAME_* | Username for the JDG user. | Example: *_openshift_* -|==================================== - -[[jdg-cache-environment-variables]] -=== Cache Environment Variables - -The following environment variables all control behavior of individual caches; when defining these values for a particular cache, substitute the cache's name for *_<CACHE_NAME>_*. - -.Cache Environment Variables -[options="header"] -|================================ -| Variable Name | Description | Example Value -| *_<CACHE_NAME>_CACHE_TYPE_* | Determines whether this cache should be distributed or replicated. Defaults to *_distributed_*. | *_replicated_* -| *_<CACHE_NAME>_CACHE_START_* | Determines if this cache should be started on server startup, or lazily when requested by a service or deployment. Defaults to *_LAZY_*. | *_EAGER_* -| *_<CACHE_NAME>_CACHE_BATCHING_* | Enables invocation batching for this cache. Defaults to *_false_*. | *_true_* -| *_<CACHE_NAME>_CACHE_STATISTICS_* | Determines whether or not the cache collects statistics. Disable for optimal performance. Defaults to *_true_*. | *_false_* -| *_<CACHE_NAME>_CACHE_MODE_* | Sets the clustered cache mode, *_ASYNC_* for asynchronous operations, or *_SYNC_* for synchronous operations. | *_ASYNC_* -| *_<CACHE_NAME>_CACHE_QUEUE_SIZE_* | In *_ASYNC_* mode this attribute can be used to trigger flushing of the queue when it reaches a specific threshold. Defaults to *_0_*, which disables flushing. | *_100_* -| *_<CACHE_NAME>_CACHE_QUEUE_FLUSH_INTERVAL_* | In *_ASYNC_* mode this attribute controls how often the asynchronous thread runs to flush the replication queue. This should be a positive integer that represents thread wakeup time in milliseconds. Defaults to *_10_*. 
| *_20_* -| *_<CACHE_NAME>_CACHE_REMOTE_TIMEOUT_* | In *_SYNC_* mode the timeout, in milliseconds, used to wait for an acknowledgement when making a remote call, after which the call is aborted and an exception is thrown. Defaults to *_17500_*. | *_25000_* -| *_<CACHE_NAME>_CACHE_OWNERS_* | Number of cluster-wide replicas for each cache entry. Defaults to *_2_*. | *_5_* -| *_<CACHE_NAME>_CACHE_SEGMENTS_* | Number of hash space segments per cluster. The recommended value is 10 * cluster size. Defaults to *_80_*. | *_30_* -| *_<CACHE_NAME>_CACHE_L1_LIFESPAN_* | Maximum lifespan, in milliseconds, of an entry placed in the L1 cache. Defaults to *_0_*, indicating that L1 is disabled. | *_100_* -| *_<CACHE_NAME>_CACHE_EVICTION_STRATEGY_* | Sets the cache eviction strategy. Available options are *_UNORDERED_*, *_FIFO_*, *_LRU_*, *_LIRS_*, and *_NONE_* (to disable eviction). Defaults to *_NONE_*. | *_FIFO_* -| *_<CACHE_NAME>_CACHE_EVICTION_MAX_ENTRIES_* | Maximum number of entries in a cache instance. If the selected value is not a power of two, the actual value will default to the least power of two larger than the selected value. A value of *_-1_* indicates no limit. Defaults to *_10000_*. | *_-1_* -| *_<CACHE_NAME>_CACHE_EXPIRATION_LIFESPAN_* | Maximum lifespan, in milliseconds, of a cache entry, after which the entry is expired cluster-wide. Defaults to *_-1_*, indicating that the entries never expire. | *_10000_* -| *_<CACHE_NAME>_CACHE_EXPIRATION_MAX_IDLE_* | Maximum idle time, in milliseconds, a cache entry will be maintained in the cache. If the idle time is exceeded, then the entry will be expired cluster-wide. Defaults to *_-1_*, indicating that the entries never expire. | *_10000_* -| *_<CACHE_NAME>_CACHE_EXPIRATION_INTERVAL_* | Interval, in milliseconds, between subsequent runs to purge expired entries from memory and any cache stores. If you wish to disable the periodic eviction process altogether, then set the interval to *_-1_*. Defaults to *_5000_*. | *_-1_* -| *_<CACHE_NAME>_CACHE_COMPATIBILITY_ENABLED_* | Enables compatibility mode for this cache. Disabled by default.
| *_true_* -| *_<CACHE_NAME>_CACHE_COMPATIBILITY_MARSHALLER_* | A marshaller to use for compatibility conversions. | *_com.acme.CustomMarshaller_* -| *_<CACHE_NAME>_JDBC_STORE_TYPE_* | Type of JDBC store to configure. This value may either be *_string_* or *_binary_*. | *_string_* -| *_<CACHE_NAME>_JDBC_STORE_DATASOURCE_* | Defines the JNDI name of the datasource. | *_java:jboss/datasources/ExampleDS_* -| *_<CACHE_NAME>_KEYED_TABLE_PREFIX_* | Defines the prefix prepended to the cache name used when composing the name of the cache entry table. Defaults to *_ispn_entry_*. | *_JDG_* -| *_<CACHE_NAME>_CACHE_INDEX_* | The indexing mode of the cache. Valid values are *_NONE_*, *_LOCAL_*, and *_ALL_*. Defaults to *_NONE_*. | *_ALL_* -| *_<CACHE_NAME>_CACHE_INDEXING_PROPERTIES_* | Comma-separated list of properties to pass on to the indexing system. | *_default.directory_provider=ram_* -| *_<CACHE_NAME>_CACHE_SECURITY_AUTHORIZATION_ENABLED_* | Enables authorization checks for this cache. Defaults to *_false_*. | *_true_* -| *_<CACHE_NAME>_CACHE_SECURITY_AUTHORIZATION_ROLES_* | Sets the valid roles required to access this cache. | *_admin,reader,writer_* -| *_<CACHE_NAME>_CACHE_PARTITION_HANDLING_ENABLED_* | If enabled, then the cache will enter degraded mode when it loses too many nodes. Defaults to *_true_*. | *_false_* -|================================ - -[[jdg-datasource-environment-variables-list]] -=== Datasource Environment Variables - -Datasource properties may be configured with the following environment variables: - -.Datasource Environment Variables -[options="header"] -|================================ -| Variable Name | Description | Example Value -| *_<NAME>_<DATABASE_TYPE>_SERVICE_HOST_* | Defines the database server's hostname or IP to be used in the datasource's *_connection_url_* property. | *_192.168.1.3_* -| *_<NAME>_<DATABASE_TYPE>_SERVICE_PORT_* | Defines the database server's port for the datasource. | *_5432_* -| *_<NAME>_<DATABASE_TYPE>_JNDI_* | Defines the JNDI name for the datasource. Defaults to *_java:jboss/datasources/<name>_<database_type>_*, where *_name_* and *_database_type_* are taken from the triplet definition.
This setting is useful if you want to override the default generated JNDI name. | *_java:jboss/datasources/test-postgresql_* -| *_<NAME>_<DATABASE_TYPE>_USERNAME_* | Defines the username for the datasource. | *_admin_* -| *_<NAME>_<DATABASE_TYPE>_PASSWORD_* | Defines the password for the datasource. | *_password_* -| *_<NAME>_<DATABASE_TYPE>_DATABASE_* | Defines the database name for the datasource. | *_myDatabase_* -| *_<NAME>_<DATABASE_TYPE>_TX_ISOLATION_* | Defines the java.sql.Connection transaction isolation level for the database. | *_TRANSACTION_READ_UNCOMMITTED_* -| *_<NAME>_<DATABASE_TYPE>_TX_MIN_POOL_SIZE_* | Defines the minimum pool size option for the datasource. | *_1_* -| *_<NAME>_<DATABASE_TYPE>_TX_MAX_POOL_SIZE_* | Defines the maximum pool size option for the datasource. | *_20_* -|================================ - -[[jdg-security-environment-variables]] -=== Security Environment Variables - -The following environment variables may be defined to customize the environment's security domain: - -.Security Environment Variables -[options="header"] -|================================ -| Variable Name | Description | Example Value -| *_SECDOMAIN_NAME_* | Define in order to enable the definition of an additional security domain. | *_myDomain_* -| *_SECDOMAIN_PASSWORD_STACKING_* | If defined, the password-stacking module option is enabled and set to the value *_useFirstPass_*. | *_true_* -| *_SECDOMAIN_LOGIN_MODULE_* | The login module to be used. Defaults to *_UsersRoles_*. | *_UsersRoles_* -| *_SECDOMAIN_USERS_PROPERTIES_* | The name of the properties file containing user definitions. Defaults to *_users.properties_*. | *_users.properties_* -| *_SECDOMAIN_ROLES_PROPERTIES_* | The name of the properties file containing role definitions. Defaults to *_roles.properties_*.
| *_roles.properties_* -|================================ - -[[jdg-exposed-ports]] -== Exposed Ports - -The following ports are exposed by default in the JBoss Data Grid xPaaS Image: - -[options="header"] -|=============================== -| Value | Description -| 8443 | Secure Web -| 8778 | - -| 11211 | memcached -| 11222 | internal hotrod -| 11333 | external hotrod -|=============================== - -[IMPORTANT] -The external hotrod connector is only available if the *_HOTROD_SERVICE_NAME_* environment variable has been defined. - -[[jdg-troubleshooting]] -== Troubleshooting - -In addition to viewing the OpenShift logs, you can troubleshoot a running JBoss Data Grid xPaaS Image container by viewing its logs. These are written to the container's standard out, and are accessible with the following command: - ----- -$ oc logs -f <pod_name> ----- - -[NOTE] -By default, the OpenShift JBoss Data Grid xPaaS Image does not have a file log handler configured. Logs are only sent to the container's standard out. diff --git a/using_images/xpaas_images/decision_server.adoc b/using_images/xpaas_images/decision_server.adoc deleted file mode 100644 index 4d735e5f9080..000000000000 --- a/using_images/xpaas_images/decision_server.adoc +++ /dev/null @@ -1,265 +0,0 @@ -[[using-images-xpaas-images-decision-server]] -= Decision Server xPaaS Image -{product-author} -{product-version} -:data-uri: -:icons: -:experimental: -:toc: macro -:toc-title: - -toc::[] - -== Overview - -Decision Server is available as a containerized xPaaS image that is designed for use with OpenShift as an execution environment for business rules. Developers can quickly build, scale, and test applications deployed across hybrid environments. - -[IMPORTANT] -There are significant differences in supported configurations and functionality -in the Decision Server xPaaS image compared to the regular release of JBoss BRMS.
- -This topic details the differences between the Decision Server xPaaS image and the -full, non-PaaS release of JBoss BRMS, and provides instructions specific to running and -configuring the Decision Server xPaaS image. Documentation for other JBoss BRMS -functionality not specific to the Decision Server xPaaS image can be found in the -https://access.redhat.com/documentation/en/red-hat-jboss-brms/[JBoss -BRMS documentation on the Red Hat Customer Portal]. - -`_EAP_HOME_` in this documentation, as in the -https://access.redhat.com/documentation/en/red-hat-jboss-brms/[JBoss -BRMS documentation], is used to refer to the JBoss EAP installation directory -where the decision server is deployed. The location of `_EAP_HOME_` inside a -Decision Server xPaaS image is *_/opt/eap/_*, which the `*JBOSS_HOME*` -environment variable is also set to by default. - -== Comparing the Decision Server xPaaS Image to the Regular Release of JBoss BRMS - -=== Functionality Differences for OpenShift Decision Server xPaaS Images - -There are several major functionality differences in the OpenShift Decision Server xPaaS image: - -* The Decision Server image extends the OpenShift EAP image, and any capabilities or limitations it has are also found in the Decision Server image. -* Only stateless scenarios are supported. -* Authoring of any content through the BRMS Console or API is not supported. - -[[Managing-OpenShift-Decision-Server-xPaaS-Images]] -=== Managing OpenShift Decision Server xPaaS Images - -As the Decision Server image is built from the OpenShift JBoss EAP xPaaS image, the JBoss EAP Management CLI -is accessible from within the container for troubleshooting purposes. - -. First open a remote shell session to the running pod: -+ ----- -$ oc rsh <pod_name> ----- -+ -.
Then run the following from the remote shell session to launch the JBoss EAP -Management CLI: -+ ----- -$ /opt/eap/bin/jboss-cli.sh ----- - -[WARNING] -Any configuration changes made using the JBoss EAP Management CLI on a running container will be lost when the container restarts. - -xref:Making-Configuration-Changes-Decision-Server[Making configuration changes to the -JBoss EAP instance inside the JBoss EAP xPaaS image] is different from the process you may be used to for a regular release of JBoss EAP. - -[[Security-Openshift-Decision-Server-xPaaS-Image]] -=== Security in the OpenShift Decision Server xPaaS Image - -Access is limited to users with the *_kie-server_* authorization role. A user with this role -can be specified via the *_KIE_SERVER_USER_* and *_KIE_SERVER_PASSWORD_* environment variables. - -[NOTE] -The HTTP/REST endpoint is configured to only allow the execution of KIE containers and querying -of KIE Server resources. Administrative functions like creating or disposing Containers, updating -ReleaseIds or Scanners, etc. are restricted. The JMS endpoint currently does not support these -restrictions. In the future, more fine-grained security configuration should be available for -both endpoints. - -ifdef::openshift-enterprise[] -== Using the Decision Server xPaaS Image Streams and Application Templates - -The Red Hat xPaaS middleware images were -xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[automatically created during the installation] -of OpenShift along with the other default image streams and templates. -endif::[] - -[[Making-Configuration-Changes-Decision-Server]] -== Running and Configuring the Decision Server xPaaS Image - -You can make changes to the Decision Server configuration in the xPaaS image using either the S2I templates, or by using a modified Decision Server image. 
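In the template-based approach, a deployment typically starts by instantiating one of the provided application templates and overriding its parameters. The following is a sketch only: the template name matches one of the provided Decision Server templates, the parameter names come from the Security section above, and the credential values are illustrative assumptions.

```shell
# Sketch: instantiate a provided Decision Server template, overriding the
# KIE Server credentials. The template must already be registered in the
# "openshift" namespace; the credential values here are placeholders.
oc new-app --template=decisionserver62-basic-s2i \
  -p KIE_SERVER_USER=kieserverUser \
  -p KIE_SERVER_PASSWORD='kieserver1!'
```

The S2I build triggered by the template then follows the process described in the next section.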
- -=== Using the Decision Server xPaaS Image Source-to-Image (S2I) Process - -The recommended method to run and configure the OpenShift Decision Server xPaaS image is to use the OpenShift S2I process together with the application template parameters and environment variables. - -The S2I process for the Decision Server xPaaS image works as follows: - -. If there is a *_pom.xml_* file in the source repository, a Maven build is triggered with the contents of `*$MAVEN_ARGS*` environment variable. -+ -* By default, the `package` goal is used with the `openshift` profile, including the system properties for skipping tests (`*-DskipTests*`) and enabling the Red Hat GA repository (`*-Dcom.redhat.xpaas.repo.redhatga*`). -+ -. The results of a successful Maven build are installed into the local Maven repository, *_/home/jboss/.m2/repository/_*, along with all dependencies for offline usage. The Decision Server xPaaS Image will load the created kjars from this local repository. -+ -* In addition to kjars resulting from the Maven build, any kjars found in the deployments source directory will also be installed into the local Maven repository. Kjars do not end up in the *_EAP_HOME/standalone/deployments/_* directory. -+ -. Any JAR (that is not a kjar), WAR, and EAR in the *_deployments_* source repository directory will be copied to the *_EAP_HOME/standalone/deployments_* directory and subsequently deployed using the JBoss EAP deployment scanner. -+ -. All files in the *_configuration_* source repository directory are copied to *_EAP_HOME/standalone/configuration_*. -+ -[NOTE] -If you want to use a custom JBoss EAP configuration file, it should be named *_standalone-openshift.xml_*. -. All files in the *_modules_* source repository directory are copied to *_EAP_HOME/modules_*. - -=== Using a Modified Decision Server xPaaS Image - -An alternative method is to make changes to the image, and then use that modified image in OpenShift. 
The templates currently provided, along with the interfaces they support, are listed below: - -.Provided Templates -[options="header"] -|===================================== -| Template Name | Supported Interfaces -| *_decisionserver62-basic-s2i.json_* | http-rest, jms-hornetq -| *_decisionserver62-https-s2i.json_* | http-rest, https-rest, jms-hornetq -| *_decisionserver62-amq-s2i.json_* | http-rest, https-rest, jms-activemq -|===================================== - -You can run the Decision Server xPaaS image in Docker, make the required configuration changes using the JBoss EAP Management CLI (*_EAP_HOME/bin/jboss-cli.sh_*) included in the Decision Server xPaaS image, and then commit the changed container as a new image. You can then use that modified image in OpenShift. - -[IMPORTANT] -It is recommended that you do not replace the OpenShift placeholders in the JBoss EAP xPaaS configuration file, as they are used to automatically configure services (such as messaging, datastores, HTTPS) during a container's deployment. These configuration values are intended to be set using environment variables. - -[NOTE] -Ensure that you follow the xref:../../creating_images/guidelines.adoc#creating-images-guidelines[guidelines for creating images]. - -[[ds-updating-rules]] -=== Updating Rules - -As each image is built from a snapshot of a specific Maven repository, whenever a new rule is added, or an existing rule modified, a new image must be created and deployed for the rule modifications to take effect. - -[[ds-endpoints]] -== Endpoints - -Clients can access the Decision Server xPaaS Image via multiple endpoints; by default the provided templates include support for REST, HornetQ, and ActiveMQ. - -[[ds-rest]] -=== REST - -Clients can use the https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_BRMS/6.2/html-single/User_Guide/index.html#The_REST_API_for_Managing_the_Realtime_Decision_Server[REST API] in various ways: - -[[ds-browser]] -==== Browser - -. 
Current server state: http://host/kie-server/services/rest/server -. List of containers: http://host/kie-server/services/rest/server/containers -. Specific container state: http://host/kie-server/services/rest/server/containers/HelloRulesContainer - -[[ds-java]] -==== Java - -[source,java] ----- -// HelloRulesClient.java -KieServicesConfiguration config = KieServicesFactory.newRestConfiguration( - "http://host/kie-server/services/rest/server", "kieserverUser", "kieserverPassword"); -config.setMarshallingFormat(MarshallingFormat.XSTREAM); -RuleServicesClient client = - KieServicesFactory.newKieServicesClient(config).getServicesClient(RuleServicesClient.class); -ServiceResponse<ExecutionResults> response = client.executeCommands("HelloRulesContainer", myCommands); ----- - -[[ds-command-line]] -==== Command Line - -[source,bash] ----- -#!/bin/sh -# request.sh -curl -X POST \ - -d @request.xml \ - -H "Accept:application/xml" \ - -H "X-KIE-ContentType:XSTREAM" \ - -H "Content-Type:application/xml" \ - -H "Authorization:Basic a2llc2VydmVyOmtpZXNlcnZlcjEh" \ - -H "X-KIE-ClassType:org.drools.core.command.runtime.BatchExecutionCommandImpl" \ -http://host/kie-server/services/rest/server/containers/instances/HelloRulesContainer ----- - -[source,xml] ----- -<batch-execution> - <insert out-identifier="person"> - <org.openshift.quickstarts.decisionserver.hellorules.Person> - <name>errantepiphany</name> - </org.openshift.quickstarts.decisionserver.hellorules.Person> - </insert> - <fire-all-rules/> -</batch-execution> ----- - -[[ds-jms]] -=== JMS - -Clients can also use the Java Messaging Service, as demonstrated below: - -[[ds-java-hornetq]] -==== Java (HornetQ) - -[source,java] ----- -// HelloRulesClient.java -Properties props = new Properties(); -props.setProperty(Context.INITIAL_CONTEXT_FACTORY, - "org.jboss.naming.remote.client.InitialContextFactory"); -props.setProperty(Context.PROVIDER_URL, "remote://host:4447"); -props.setProperty(Context.SECURITY_PRINCIPAL, "kieserverUser"); -props.setProperty(Context.SECURITY_CREDENTIALS, "kieserverPassword"); -InitialContext context = new InitialContext(props); -KieServicesConfiguration config = - KieServicesFactory.newJMSConfiguration(context, "hornetqUser", "hornetqPassword");
-config.setMarshallingFormat(MarshallingFormat.XSTREAM); -RuleServicesClient client = - KieServicesFactory.newKieServicesClient(config).getServicesClient(RuleServicesClient.class); -ServiceResponse<ExecutionResults> response = client.executeCommands("HelloRulesContainer", myCommands); ----- - -[[ds-java-activemq]] -==== Java (ActiveMQ) - -[source,java] ----- -// HelloRulesClient.java -Properties props = new Properties(); -props.setProperty(Context.INITIAL_CONTEXT_FACTORY, - "org.apache.activemq.jndi.ActiveMQInitialContextFactory"); -props.setProperty(Context.PROVIDER_URL, "tcp://host:61616"); -props.setProperty(Context.SECURITY_PRINCIPAL, "kieserverUser"); -props.setProperty(Context.SECURITY_CREDENTIALS, "kieserverPassword"); -InitialContext context = new InitialContext(props); -ConnectionFactory connectionFactory = (ConnectionFactory)context.lookup("ConnectionFactory"); -Queue requestQueue = (Queue)context.lookup("dynamicQueues/queue/KIE.SERVER.REQUEST"); -Queue responseQueue = (Queue)context.lookup("dynamicQueues/queue/KIE.SERVER.RESPONSE"); -KieServicesConfiguration config = KieServicesFactory.newJMSConfiguration( - connectionFactory, requestQueue, responseQueue, "activemqUser", "activemqPassword"); -config.setMarshallingFormat(MarshallingFormat.XSTREAM); -RuleServicesClient client = - KieServicesFactory.newKieServicesClient(config).getServicesClient(RuleServicesClient.class); -ServiceResponse<ExecutionResults> response = client.executeCommands("HelloRulesContainer", myCommands); ----- - -[[ds-troubleshooting]] -== Troubleshooting - -In addition to viewing the OpenShift logs, you can troubleshoot a running Decision Server xPaaS Image container by viewing its logs. These are written to the container's standard out, and are accessible with the following command: - ----- -$ oc logs -f <pod_name> ----- - -[NOTE] -By default, the OpenShift Decision Server xPaaS image does not have a file log handler configured. Logs are only sent to the container's standard out.
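The `Authorization: Basic` header used in the REST command-line example above is simply the base64 encoding of the `user:password` pair. One way to generate it for your own credentials (the pair below is the example's assumed `kieserver`/`kieserver1!`):

```shell
# Base64-encode KIE Server credentials for the HTTP Authorization header.
# Substitute your own KIE_SERVER_USER:KIE_SERVER_PASSWORD pair.
printf '%s' 'kieserver:kieserver1!' | base64
# → a2llc2VydmVyOmtpZXNlcnZlcjEh
```

Using `printf` rather than `echo` avoids encoding a trailing newline into the credentials.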
diff --git a/using_images/xpaas_images/eap.adoc b/using_images/xpaas_images/eap.adoc deleted file mode 100644 index 6b4087f3734e..000000000000 --- a/using_images/xpaas_images/eap.adoc +++ /dev/null @@ -1,188 +0,0 @@ -[[using-images-xpaas-images-eap]] -= Red Hat JBoss Enterprise Application Platform (JBoss EAP) xPaaS Images -{product-author} -{product-version} -:data-uri: -:icons: -:toc: macro -:toc-title: -:description: Set up and use xPaaS JBoss EAP 6.4 and 7 Beta images with OpenShift - -toc::[] - -== Overview - -Red Hat offers a containerized xPaaS image for the Red Hat JBoss Enterprise Application Platform (JBoss EAP) that is designed for use with OpenShift. Using this image, developers can quickly and easily build, scale, and test applications deployed across hybrid environments. - -== Comparing the Product and Image - -The xPaaS JBoss EAP images differ from the JBoss EAP product in several ways: - -. The image does not include the JBoss EAP Management Console used to manage xPaaS JBoss EAP images. -. The JBoss EAP Management CLI is included in the xPaaS JBoss EAP image, but it can only be accessed from within the container's pod. -. Domain mode is not supported in the xPaaS JBoss EAP image. Instead, OpenShift manages the creation and distribution of applications in the containers. -. The image's default root page is disabled. Deploy your own application to the root context as *_ROOT.war_*. -. The EAP 6.4 image supports A-MQ for inter-pod and remote messaging. HornetQ is only supported for intra-pod messaging and only enabled when A-MQ is absent. The EAP 7 Beta image includes Artemis as a replacement for HornetQ. - -For further information about JBoss EAP functionality and features independent of the JBoss EAP image, see the https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/[JBoss EAP documentation] on the Red Hat Customer Portal.
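Because the Management CLI can only be reached from within the pod (point 2 above), a troubleshooting session always goes through a remote shell first. A minimal sketch, where `<pod_name>` is a placeholder for a real pod name:

```shell
# Open a remote shell in the running EAP pod, then start the
# pod-local Management CLI from the image's EAP installation.
oc rsh <pod_name>
/opt/eap/bin/jboss-cli.sh
```

Remember that any changes made this way are lost when the container restarts.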
- -== Comparing the xPaaS JBoss EAP 6.4 and 7.0 Beta Images - -Red Hat offers two xPaaS EAP images for use with OpenShift. The first is based on JBoss EAP 6.4 and the second is based on JBoss EAP 7 Beta. There are several differences between the two images: - -*JBoss Web is replaced by Undertow* - -* The xPaaS JBoss EAP 6.4 image uses JBoss Web. - -* The xPaaS JBoss EAP 7 Beta image uses Undertow instead of JBoss Web. This change only affects users implementing custom JBoss Web Valves in their applications. Affected users must refer to the Red Hat JBoss EAP 7 Beta documentation for details about https://access.redhat.com/documentation/en/red-hat-jboss-enterprise-application-platform/version-7.0.beta/migration-guide/#migrate_custom_valves[migrating JBoss EAP Web Valve handlers]. - -*HornetQ is replaced by Artemis* - -* The EAP 6.4 image only uses HornetQ for intra-pod messaging when A-MQ is absent. - -* The EAP 7 Beta image uses Artemis instead of HornetQ. This change resulted in renaming the `*HORNETQ_QUEUES*` and `*HORNETQ_TOPICS*` environment variables to `*MQ_QUEUES*` and `*MQ_TOPICS*` respectively. For complete instructions on migrating applications from JBoss EAP 6.4 to 7 Beta, see the https://access.redhat.com/documentation/en/red-hat-jboss-enterprise-application-platform/7.0.beta/migration-guide/migration-guide[JBoss EAP 7 Beta Migration Guide]. - -== Compatibility with xPaaS JBoss EAP - -See the xPaaS section of the https://access.redhat.com/articles/2176281[OpenShift and Atomic Platform Tested Integrations page] for details about OpenShift EAP image version compatibility. - -== Setting Up the xPaaS JBoss EAP Image - -The following is a list of prerequisites for using the xPaaS JBoss EAP images: - -. *Acquire Red Hat Subscriptions* - Ensure that you have the relevant subscriptions for OpenShift as well as a subscription for xPaaS Middleware. - -.
*Install OpenShift* - Before using the xPaaS JBoss EAP images, you must have an -OpenShift environment installed and configured. See the -xref:../../install/index.adoc#install-planning[Installing Clusters] guide for -steps on using Ansible to install in production environments. - -. *Install and Deploy Container Image Registry* - Install the container image registry and then ensure that the container image registry is deployed to locally manage images as follows: -+ ----- -$ oc adm registry --config=/etc/origin/master/admin.kubeconfig ----- -+ -For further information, see xref:../../install_config/registry/index.adoc#install-config-registry-overview[Deploying a Container Image Registry] - -. *Deploy a Router* - Use the instructions at the xref:../../install_config/router/index.adoc#install-config-router-overview[Deploying a Router] page for this step. - -. *Privileges* - Ensure that you can run the `oc create` command with xref:../../architecture/additional_concepts/authorization.adoc#roles[cluster-admin] privileges. - -. *Create Image Streams* - Image streams are configured during the Quick or Advanced OpenShift Installation. If required, manually create the image streams for both versions of the xPaaS JBoss EAP image as follows: -+ ----- -$ oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v1.1/xpaas-streams/jboss-image-streams.json -n openshift ----- -+ -[NOTE] -==== -For further information about creating image streams, see xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[Loading the Default Image Streams and Templates] -==== - -. *Create Instant App Templates* - Instant App templates define a full set of objects for running applications and are configured during the Quick or Advanced OpenShift Installation. If required, create Instant App templates as follows: - -.. 
Create the core Instant App templates: -+ ----- -$ oc create -f openshift-ansible/roles/openshift_examples/files/examples/quickstart-templates -n openshift ----- -+ - -.. Register Instant App templates for xPaaS Middleware products: -+ ----- -$ oc create -f openshift-ansible/roles/openshift_examples/files/examples/xpaas-templates -n openshift ----- -+ - - -== Modifying the JDK Used by the xPaaS JBoss EAP Image - -The xPaaS JBoss EAP 6.4 image includes OpenJDK 1.7 and 1.8, with OpenJDK 1.8 as the default. The xPaaS JBoss EAP 7 Beta image only includes and supports OpenJDK 1.8. - -To change the JDK version used by the xPaaS JBoss EAP 6.4 image: - -. Ensure that the *_pom.xml_* file specifies that the code must be built using the intended JDK version. - -. In the S2I application template, configure the image's `*JAVA_HOME*` environment variable to point to the intended JDK version. For example: -+ -.Setting the JDK version -==== -Change the defined value to point to the required version of the JDK. ----- -name: "JAVA_HOME" -value: "/usr/lib/jvm/java-1.7.0" ----- -==== -+ - - -== Getting Started Using xPaaS JBoss EAP Images - -=== Configuring the xPaaS JBoss EAP Images - -You can change the configuration for the xPaaS JBoss EAP images by either using the S2I (Source-to-Image) templates, or by using a modified xPaaS JBoss EAP image. Red Hat recommends using the S2I method to configure the xPaaS JBoss EAP image. - -=== Configuring the xPaaS JBoss EAP Image using the S2I Templates - -The recommended method to run and configure the xPaaS JBoss EAP image is to use the OpenShift S2I process together with the application template parameters and environment variables. - -[NOTE] -==== -The variable `*EAP_HOME*` is used to denote the path to the JBoss EAP installation. Replace this variable with the actual path to your JBoss EAP installation. -==== - -The S2I process for the xPaaS JBoss EAP image works as follows: - -.
If a *_pom.xml_* file is present in the source repository, a Maven build using the contents of the `*$MAVEN_ARGS*` environment variable is triggered. By default, the `package` goal is used with the `openshift` profile, including the system properties for skipping tests (`*-DskipTests*`) and enabling the Red Hat GA repository (`*-Dcom.redhat.xpaas.repo.redhatga*`). The results of a successful Maven build are copied to *_EAP_HOME/standalone/deployments_*. This includes all JAR, WAR, and EAR files from the source repository directory specified by the `*$ARTIFACT_DIR*` environment variable. The default value of `*$ARTIFACT_DIR*` is the *_target_* directory. - -. Any JAR, WAR, and EAR in the *_deployments_* source repository directory are copied to the *_EAP_HOME/standalone/deployments_* directory. - -. All files in the *_configuration_* source repository directory are copied to *_EAP_HOME/standalone/configuration_*. If you want to use a custom JBoss EAP configuration file, it should be named *_standalone-openshift.xml_*. - -. All files in the *_modules_* source repository directory are copied to *_EAP_HOME/modules_*. - -[[using-a-modified-jboss-eap-xpaas-image]] -=== Using a Modified xPaaS JBoss EAP Image - -You can make changes to an image or create a custom image to use in OpenShift. - -The JBoss EAP configuration file used by OpenShift in the xPaaS JBoss EAP image is *_EAP_HOME/standalone/configuration/standalone-openshift.xml_*. The script to start JBoss EAP is *_EAP_HOME/bin/openshift-launch.sh_*. - -[IMPORTANT] -==== -Ensure that you have read the xref:../../creating_images/guidelines.adoc#creating-images-guidelines[guidelines for creating images] and follow them when creating a modified image. -==== - -To use a modified image in OpenShift: - -[WARNING] -==== -This procedure results in losing configuration placeholders for various settings such as datasources, messaging, HTTPS, KeyCloak, etc. A workaround for this issue is to create a duplicate copy of the *_standalone.xml_* file to edit.
The original and edited versions can be compared after all edits are complete and placeholder values can be copied to the edited version from the original version to retain these values. -==== - -. Run the xPaaS JBoss EAP image using Docker. - -. Make the required changes using the JBoss EAP Management CLI by running the script at *_EAP_HOME/bin/jboss-cli.sh_*. - -. Commit the changed container as a new image and then use the modified image in OpenShift. - -=== Troubleshooting - -If an application is not starting, use the following command to view details to locate and troubleshoot the problem: - ----- -$ oc describe po <pod_name> ----- - -To troubleshoot running xPaaS JBoss EAP containers, you can either view the OpenShift logs, or view the JBoss EAP logs displayed to the container's console. Use the following command to view the JBoss EAP logs: - ----- -$ oc logs -f <pod_name> ----- - -[NOTE] -==== -By default, the xPaaS JBoss EAP image does not have a file log handler configured. Logs are therefore only sent to the console. -==== diff --git a/using_images/xpaas_images/eap_old.adoc b/using_images/xpaas_images/eap_old.adoc deleted file mode 100644 index 4d8f7237c388..000000000000 --- a/using_images/xpaas_images/eap_old.adoc +++ /dev/null @@ -1,149 +0,0 @@ -[[using-images-xpaas-images-eap-old]] -= Red Hat JBoss Enterprise Application Platform (JBoss EAP) xPaaS Image -{product-author} -{product-version} -:data-uri: -:icons: -:experimental: -:toc: macro -:toc-title: - -toc::[] - -== Overview - -Red Hat JBoss Enterprise Application Platform (JBoss EAP) is available as a containerized xPaaS image that is designed for use with OpenShift. Developers can quickly build, scale, and test applications deployed across hybrid environments. - -[IMPORTANT] -There are significant differences in supported configurations and functionality -in the JBoss EAP xPaaS image compared to the regular release of JBoss EAP.
- -This topic details the differences between the JBoss EAP xPaaS image and the -regular release of JBoss EAP, and provides instructions specific to running and -configuring the JBoss EAP xPaaS image. Documentation for other JBoss EAP -functionality not specific to the JBoss EAP xPaaS image can be found in the -https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/[JBoss -EAP documentation on the Red Hat Customer Portal]. - -`_EAP_HOME_` in this documentation, as in the -https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/[JBoss -EAP documentation], is used to refer to the JBoss EAP installation directory. The -location of `_EAP_HOME_` inside a JBoss EAP xPaaS image is *_/opt/eap/_*, which -the `*JBOSS_HOME*` environment variable is set to by default. - -== Comparing the JBoss EAP xPaaS Image to the Regular Release of JBoss EAP - -=== Functionality Differences for OpenShift JBoss EAP xPaaS Images - -There are several major functionality differences in the OpenShift JBoss EAP xPaaS image: - -* The JBoss EAP Management Console is not available xref:Managing-OpenShift-JBoss-EAP-xPaaS-Images[to manage OpenShift JBoss EAP xPaaS images]. -* The JBoss EAP Management CLI is only bound locally. This means that you can only access the Management CLI of a container from within the pod. -* Domain mode is not supported. OpenShift controls the creation and distribution of applications in the containers. -* The default root page is disabled. You may want to deploy your own application to the root context (as *_ROOT.war_*). -* A-MQ is supported for inter-pod and remote messaging. HornetQ is only supported for intra-pod messaging and only enabled in the absence of A-MQ.
- -[[Managing-OpenShift-JBoss-EAP-xPaaS-Images]] -=== Managing OpenShift JBoss EAP xPaaS Images - -A major difference in managing an OpenShift JBoss EAP xPaaS image is that there is no Management Console exposed for the JBoss EAP installation inside the image. Because images are intended to be immutable, with modifications being written to a non-persistent file system, the Management Console is not exposed. - -However, the JBoss EAP Management CLI (*_EAP_HOME/bin/jboss-cli.sh_*) is still -accessible from within the container for troubleshooting purposes. First open a -remote shell session to the running pod: - ----- -$ oc rsh ----- - -Then run the following from the remote shell session to launch the JBoss EAP -Management CLI: - ----- -$ /opt/eap/bin/jboss-cli.sh ----- - -[WARNING] -Any configuration changes made using the JBoss EAP Management CLI on a running container will be lost when the container restarts. - -xref:Making-Configuration-Changes-EAP[Making configuration changes to the -JBoss EAP instance inside the JBoss EAP xPaaS image] is different from the process you may be used to for a regular release of JBoss EAP. - -=== Unsupported Configurations - -The following is a list of unsupported configurations specific to the JBoss EAP xPaaS image: - -* Using MySQL in a scaled environment with XA distributed transactions is not recommended. For applications that support both scaling and XA distributed transactions, PostgreSQL is recommended instead. -// This is based on https://issues.jboss.org/browse/CLOUD-56 - -ifdef::openshift-enterprise[] -== Using the JBoss EAP xPaaS Image Streams and Application Templates - -The Red Hat xPaaS middleware images were -xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[automatically created during the installation] -of OpenShift along with the other default image streams and templates. 
-endif::[] - -[[Making-Configuration-Changes-EAP]] -== Running and Configuring the JBoss EAP xPaaS Image - -You can make changes to the JBoss EAP configuration in the xPaaS image using either the S2I templates, or by using a modified JBoss EAP xPaaS image. - -=== Using the JBoss EAP xPaaS Image Source-to-Image (S2I) Process - -The recommended method to run and configure the OpenShift JBoss EAP xPaaS image is to use the OpenShift S2I process together with the application template parameters and environment variables. - -The S2I process for the JBoss EAP xPaaS image works as follows: - -. If there is a *_pom.xml_* file in the source repository, a Maven build is triggered with the contents of `*$MAVEN_ARGS*` environment variable. -+ -By default the `package` goal is used with the `openshift` profile, including the system properties for skipping tests (`*-DskipTests*`) and enabling the Red Hat GA repository (`*-Dcom.redhat.xpaas.repo.redhatga*`). -+ -The results of a successful Maven build are copied to *_EAP_HOME/standalone/deployments_*. This includes all JAR, WAR, and EAR files from the directory within the source repository specified by `*$ARTIFACT_DIR*` environment variable. The default value of `*$ARTIFACT_DIR*` is the *_target_* directory. -. Any JAR, WAR, and EAR in the *_deployments_* source repository directory are copied to the *_EAP_HOME/standalone/deployments_* directory. -. All files in the *_configuration_* source repository directory are copied to *_EAP_HOME/standalone/configuration_*. -+ -[NOTE] -If you want to use a custom JBoss EAP configuration file, it should be named *_standalone-openshift.xml_*. -. All files in the *_modules_* source repository directory are copied to *_EAP_HOME/modules_*. - -==== Using a Different JDK Version in the JBoss EAP xPaaS Image - -The JBoss EAP xPaaS image may come with multiple versions of OpenJDK installed, but only one is the default. 
For example, the JBoss EAP 6.4 xPaaS image comes with OpenJDK 1.7 and 1.8 installed, but OpenJDK 1.8 is the default. - -If you want the JBoss EAP xPaaS image to use a different JDK version than the default, you must: - -* Ensure that your *_pom.xml_* specifies to build your code using the intended JDK version. -* In the S2I application template, configure the image's `*JAVA_HOME*` environment variable to point to the intended JDK version. For example: -+ ----- -{ - "name": "JAVA_HOME", - "value": "/usr/lib/jvm/java-1.7.0" -} ----- - -=== Using a Modified JBoss EAP xPaaS Image - -An alternative method is to make changes to the image, and then use that modified image in OpenShift. - -The JBoss EAP configuration file that OpenShift uses inside the JBoss EAP xPaaS image is *_EAP_HOME/standalone/configuration/standalone-openshift.xml_*, and the JBoss EAP startup script is *_EAP_HOME/bin/openshift-launch.sh_*. - -You can run the JBoss EAP xPaaS image in Docker, make the required configuration changes using the JBoss EAP Management CLI (*_EAP_HOME/bin/jboss-cli.sh_*), and then commit the changed container as a new image. You can then use that modified image in OpenShift. - -[IMPORTANT] -It is recommended that you do not replace the OpenShift placeholders in the JBoss EAP xPaaS configuration file, as they are used to automatically configure services (such as messaging, datastores, HTTPS) during a container's deployment. These configuration values are intended to be set using environment variables. - -[NOTE] -Ensure that you follow the xref:../../creating_images/guidelines.adoc#creating-images-guidelines[guidelines for creating images]. - -== Troubleshooting - -In addition to viewing the OpenShift logs, you can troubleshoot a running JBoss EAP container by viewing the JBoss EAP logs that are outputted to the container's console: - ----- -$ oc logs -f ----- - -[NOTE] -By default, the OpenShift JBoss EAP xPaaS image does not have a file log handler configured. 
Logs are only sent to the console. diff --git a/using_images/xpaas_images/fuse.adoc b/using_images/xpaas_images/fuse.adoc deleted file mode 100644 index 323241d5c7c9..000000000000 --- a/using_images/xpaas_images/fuse.adoc +++ /dev/null @@ -1,290 +0,0 @@ -[[using-images-xpaas-images-fuse]] -= Red Hat JBoss Fuse Integration Services -{product-author} -{product-version} -:data-uri: -:icons: -:experimental: -:toc: macro -:toc-title: -:prewrap!: - -toc::[] - -== Overview -Red Hat JBoss Fuse Integration Services provides a set of tools and containerized xPaaS images that enable development, deployment, and management of integration microservices within OpenShift. - -[IMPORTANT] -==== -There are significant differences in supported configurations and functionality in Fuse Integration Services compared to the standalone JBoss Fuse product. -==== - -=== Differences Between Fuse Integration Services and JBoss Fuse -There are several major functionality differences: - -* Fuse Management Console is not included as Fuse administration views have been integrated directly within the OpenShift Web Console. -* An application deployment with Fuse Integration Services consists of an application and all required runtime components packaged inside a container image. Applications are not deployed to a runtime as with Fuse, the application image itself is a complete runtime environment deployed and managed through OpenShift. -* Patching in an OpenShift environment is different from standalone Fuse since each application image is a complete runtime environment. To apply a patch, the application image is rebuilt and redeployed within OpenShift. Core OpenShift management capabilities allow for rolling upgrades and side-by-side deployment to maintain availability of your application during upgrade. -* Provisioning and clustering capabilities provided by Fabric in Fuse have been replaced with equivalent functionality in Kubernetes and OpenShift. 
There is no need to create or configure individual child containers as OpenShift automatically does this for you as part of deploying and scaling your application. -* Messaging services are created and managed using the A-MQ xPaaS images for OpenShift and not included directly within Fuse. Fuse Integration Services provides an enhanced version of the camel-amq component to allow for seamless connectivity to messaging services in OpenShift through Kubernetes. -* Live updates to running Karaf instances using the Karaf shell is strongly discouraged as updates will not be preserved if an application container is restarted or scaled up. This is a fundamental tenet of immutable architecture and essential to achieving scalability and flexibility within OpenShift. - -Additional details on technical differences and support scope are documented in an associated https://access.redhat.com/articles/2112371[KCS article]. - -== Using Fuse Integration Services -You can start using Fuse Integration Services by creating an application and deploying it to OpenShift using one of the following application development workflows: - -* Fabric8 Maven Workflow -* OpenShift Source-to-Image (S2I) Workflow - -Both workflows begin with creating a new project from a Maven archetype. 
- -=== Maven Archetypes Catalog -The Maven Archetype catalog includes the following examples: - -|=== - -| cdi-camel-http-archetype | Creates a new Camel route using CDI in a standalone Java Container calling the remote camel-servlet quickstart - -| cdi-cxf-archetype | Creates a new CXF JAX-RS using CDI running in a standalone Java Container - -| cdi-camel-archetype | Creates a new Camel route using CDI in a standalone Java Container - -| cdi-camel-jetty-archetype | Creates a new Camel route using CDI in a standalone Java Container using Jetty as HTTP server - -| java-simple-mainclass-archetype | Creates a new Simple standalone Java Container (main class) - -| java-camel-spring-archetype | Creates a new Camel route using Spring XML in a standalone Java container - -| karaf-cxf-rest-archetype | Creates a new RESTful WebService Example using JAX-RS - -| karaf-camel-rest-sql-archetype | Creates a new Camel Example using Rest DSL with SQL Database - -| karaf-camel-log-archetype | Creates a new Camel Log Example - -|=== - -Begin by selecting the archetype which matches the type of application you would like to create. - -[[fuse-create-an-application-from-the-maven-archetype-catalog]] -=== Create an Application from the Maven Archetype Catalog - -You must configure the Maven repositories, which hold the archetypes and artifacts you may need, before creating a sample project: - -* JBoss Fuse repository: `https://repo.fusesource.com/nexus/content/groups/public/` -* RedHat GA repository: `https://maven.repository.redhat.com/ga` - -Use the maven archetype catalog to create a sample project with the required resources. 
The command to create a sample project is: - ----- -$ mvn archetype:generate \ - -DarchetypeCatalog=https://repo.fusesource.com/nexus/content/groups/public/archetype-catalog.xml \ - -DarchetypeGroupId=io.fabric8.archetypes \ - -DarchetypeVersion=2.2.0.redhat-079 \ - -DarchetypeArtifactId= ----- - -[NOTE] -==== -Replace with the name of the archetype that you want to use. For example, *karaf-camel-log-archetype* creates a new Camel log example. -==== - -This will create a maven project with all required dependencies. Maven properties and plug-ins that are used to create container images are added to the *_pom.xml_* file. - -=== Fabric8 Maven Workflow -Creates a new project based off a Maven application template created through Archetype catalog. This catalog provides examples of Java and Karaf projects and supports S2I and Maven deployment workflows. - -. Set the following environment variables to communicate with OpenShift and a Docker daemon: - -+ -|=== - -| DOCKER_HOST | Specifies the connection to a Docker daemon used to build an application container image | `tcp://10.1.2.2:2375` - -| KUBERNETES_MASTER | Specifies the URL for contacting the OpenShift API server | `https://10.1.2.2:8443` - -| KUBERNETES_DOMAIN | Domain used for creating routes. Your OpenShift API server must be mapped to all hosts of this domain. | `openshift.dev` - -|=== -+ - -. Login to OpenShift using CLI and select the project to which to deploy. - -+ ----- -$ oc login - -$ oc project ----- - -. Create a sample project as described in xref:fuse-create-an-application-from-the-maven-archetype-catalog[Create an Application from the Maven Archetype Catalog]. - -. Build and push the project to OpenShift. You can use following maven goals for building and pushing container images. - -+ -|=== - -| docker:build | Builds the container image for your maven project. - -| docker:push | Pushes the locally built container image to the global or a local container image registry. 
This step is optional when developing on a single-node OpenShift cluster. - -| fabric8:json | Generates the Kubernetes JSON file for your Maven project. This goal is bound to the `package` phase and does not need to be called explicitly when running `mvn install`. - -| fabric8:apply | Applies the Kubernetes JSON file to the current Kubernetes environment and namespace. - -|=== -+ - -There are a few pre-configured Maven profiles that you can use to build the project. These profiles are combinations of the above Maven goals that simplify the build process. - -+ -|=== - -| mvn -Pf8-build | Comprises `clean`, `install`, `docker:build`, and `fabric8:json`. This builds the container image and the JSON template for a project. - -| mvn -Pf8-local-deploy | Comprises `clean`, `install`, `docker:build`, `fabric8:json`, and `fabric8:apply`. This creates the container image and the JSON template, and then applies them to OpenShift. - -| mvn -Pf8-deploy | Comprises `clean`, `docker:build`, `fabric8:json`, `docker:push`, and `fabric8:apply`. This creates the container image and the JSON template, pushes the image to a container image registry, and applies the template to OpenShift. - -|=== -+ -In this example, we build locally by running the command: -+ ----- -$ mvn -Pf8-local-deploy ----- - -. Log in to the OpenShift Web Console. A pod is created for the newly created application. You can view the status of this pod, and of the deployments and services that the application creates. - -==== Authenticating Against a Registry -For multi-node OpenShift setups, the image created must be pushed to the OpenShift registry. This registry must be reachable from the outside through a route. Authentication against this registry reuses the OpenShift authentication with `oc login`.
Assuming that your OpenShift registry is exposed as `registry.openshift.dev:80`, the project image can be deployed to the registry with the following command: - ----- -$ mvn docker:push -Ddocker.registry=registry.openshift.dev:80 \ - -Ddocker.username=$(oc whoami) \ - -Ddocker.password=$(oc whoami -t) ----- - -To push changes to the registry, the OpenShift project must exist and the user pushing the container image must have access to that project. All the examples use the property `fabric8.dockerUser` as the container image user, which defaults to `fabric8/` (note the trailing slash). When this user is used unaltered, an OpenShift project named `fabric8` must exist. It can be created with `oc new-project fabric8`. - -[[fuse-plug-in-configuration]] -==== Plug-in Configuration -The `docker-maven-plugin` and `fabric8-maven-plugin` plug-ins are responsible for creating container images and OpenShift API objects, and can be configured flexibly. The examples from the archetypes introduce some extra properties that can be changed when running Maven: - -|=== - -| docker.registry | Registry to use for `docker:push` and `-Pf8-deploy` - -| docker.username | Username for authentication against the registry - -| docker.password | Password for authentication against the registry - -| docker.from | Base image for the application container image - -| fabric8.dockerUser | User used as the user part of the image name. It must end with a trailing `/`. The default value is `fabric8/`. - -| docker.image | The final container image name. The default value is `${fabric8.dockerUser}${project.artifactId}:${project.version}`. - -|=== - -[[fuse-using-application-templates]] -=== OpenShift Source-to-Image (S2I) Workflow -Applications are created through the OpenShift web console and the CLI using application templates. If you have a JSON or YAML file that defines a template, you can upload the template to the project using the CLI.
This saves the template to the project for repeated use by users with appropriate access to that project. You can add the remote Git repository location to the template using template parameters. This allows you to pull the application source from a remote repository and build it using the source-to-image (S2I) method. - -JBoss Fuse Integration Services application templates depend on S2I builder `*ImageStreams*`, which must be created once. The OpenShift installer creates them automatically. For existing OpenShift setups, they can be created with the following command: - ----- -$ oc create -n openshift -f /usr/share/openshift/examples/xpaas-streams/fis-image-streams.json ----- - -The `*ImageStreams*` may be created in a namespace other than *openshift* by changing it both in the command and in the corresponding `*IMAGE_STREAM_NAMESPACE*` template parameter when creating applications. - -==== Create an Application Using Templates - -. Create an application template using the `*mvn archetype:generate*` command. To create an application, upload the template to your current project’s template library with the following command: - -+ ----- -$ oc create -f quickstart-template.json -n ----- -+ - -The template is now available for selection using the web console or the CLI. - -. Log in to the OpenShift Web Console. In the desired project, click *Add to Project* to create the objects from an uploaded template. - -. Select the template from the list of templates in your project or from the global template library. - -. Edit template parameters and then click *Create*.
For example, template parameters for a camel-spring quickstart are: - -+ -|=== -| Parameter | Description | Default - -| APP_NAME | Application Name | Artifact name of the project - -| GIT_REPO | Git repository, required | - -| GIT_REF | Git ref to build | `master` - -| SERVICE_NAME | Exposed Service name | - -| BUILDER_VERSION | Builder version | 1.0 - -| APP_VERSION | Application version | Maven project version - -| MAVEN_ARGS | Arguments passed to mvn in the build | `package -DskipTests -e` - -| MAVEN_ARGS_APPEND | Extra arguments passed to mvn, e.g. for multi-module builds use `-pl groupId:module-artifactId -am` | - -| ARTIFACT_DIR | Maven build directory | `target/` - -| IMAGE_STREAM_NAMESPACE | Namespace in which the JBoss Fuse ImageStreams are installed. | - -| BUILD_SECRET | The secret needed to trigger a build. Generated if empty. | - -|=== - -. After successful creation of the application, you can view the status of the application by clicking the *Pods* tab or by running the following command: -+ ----- -$ oc get pods ----- - -For more information, see xref:../../dev_guide/templates.adoc#dev-guide-templates[Application -Templates]. - -[[fuse-developing-applications]] -=== Developing Applications - -==== Injecting Kubernetes Services into Applications - -You can inject Kubernetes services into applications by labeling the pods and using those labels to select the required pods to provide a logical service. These labels are simple key/value pairs. - -[[fuse-cdi-injection]] -===== CDI Injection - -Fabric8 provides a CDI extension that you can use to inject Kubernetes resources into your applications. To use the CDI extension, first add the dependency to the project's *_pom.xml_* file: - ----- -<dependency> -  <groupId>io.fabric8</groupId> -  <artifactId>fabric8-cdi</artifactId> -  <version>${fabric8.version}</version> -</dependency> ----- - -The next step is to identify the field that requires the service, and then inject the service by adding a `*@ServiceName*` annotation to it. For example: - ----- -@Inject -@ServiceName("my-service") -private String service;
---- - -The `*@PortName*` annotation is used to select a specific port by name when multiple ports are defined for a service. - -[[fuse-using-environment-variables-as-properties]] -===== Using Environment Variables as Properties - -You can also access a service by using environment variables that expose its fixed IP address and port: `*SERVICE_HOST*` and `*SERVICE_PORT*`. `*SERVICE_HOST*` is the host (IP) address of the service, and `*SERVICE_PORT*` is the port of the service. diff --git a/using_images/xpaas_images/index.adoc b/using_images/xpaas_images/index.adoc deleted file mode 100644 index 367ad3fe1e08..000000000000 --- a/using_images/xpaas_images/index.adoc +++ /dev/null @@ -1,9 +0,0 @@ -[[using-images-xpaas-images-index]] -= Overview -{product-author} -{product-version} -:data-uri: -:icons: -:experimental: - -Red Hat offers a containerized xPaaS image for a host of middleware products that are designed for use with {product-title}. The documentation for these images is in the link:https://access.redhat.com/documentation/en/red-hat-jboss-middleware-for-openshift/[Red Hat Customer Portal]. diff --git a/using_images/xpaas_images/jws.adoc b/using_images/xpaas_images/jws.adoc deleted file mode 100644 index 043b4d76f031..000000000000 --- a/using_images/xpaas_images/jws.adoc +++ /dev/null @@ -1,64 +0,0 @@ -[[using-images-xpaas-images-jws]] -= Red Hat JBoss Web Server xPaaS Images -{product-author} -{product-version} -:data-uri: -:icons: -:experimental: -:toc: macro -:toc-title: - -toc::[] - -== Overview - -The Apache Tomcat 7 and Apache Tomcat 8 components of Red Hat JBoss Web Server 3 are available as containerized xPaaS images that are designed for use with OpenShift. Developers can use these images to quickly build, scale, and test Java web applications deployed across hybrid environments. - -[IMPORTANT] -There are significant differences in functionality between the JBoss Web Server xPaaS images and the regular release of JBoss Web Server.
- -This topic details the differences between the JBoss Web Server xPaaS images and the regular release of JBoss Web Server, and provides instructions specific to running and configuring the JBoss Web Server xPaaS images. Documentation for other JBoss Web Server functionality not specific to the JBoss Web Server xPaaS images can be found in the https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Web_Server/[JBoss Web Server documentation on the Red Hat Customer Portal]. - -The location of `_JWS_HOME_/tomcat/` inside a JBoss Web Server xPaaS image is: *_/opt/webserver/_*. - -== Functionality Differences in the OpenShift JBoss Web Server xPaaS Images - -A major functionality difference compared to the regular release of JBoss Web Server is that there is no Apache HTTP Server in the OpenShift JBoss Web Server xPaaS images. All load balancing in OpenShift is handled by the OpenShift router, so there is no need for a load-balancing Apache HTTP Server with mod_cluster or mod_jk connectors. - -ifdef::openshift-enterprise[] -== Using the JBoss Web Server xPaaS Image Streams and Application Templates - -The Red Hat xPaaS middleware images were -xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[automatically created during the installation] -of OpenShift along with the other default image streams and templates. - -[NOTE] -The JBoss Web Server xPaaS application templates are distributed as two sets: one set for Tomcat 7, and another for Tomcat 8. -endif::[] - -== Using the JBoss Web Server xPaaS Image Source-to-Image (S2I) Process - -To run and configure the OpenShift JBoss Web Server xPaaS images, use the OpenShift S2I process with the application template parameters and environment variables. - -The S2I process for the JBoss Web Server xPaaS images works as follows: - -. If there is a *_pom.xml_* file in the source repository, a Maven build is triggered with the contents of `*$MAVEN_ARGS*` environment variable. 
-+ -By default the `package` goal is used with the `openshift` profile, including the system properties for skipping tests (`*-DskipTests*`) and enabling the Red Hat GA repository (`*-Dcom.redhat.xpaas.repo.redhatga*`). -+ -The results of a successful Maven build are copied to *_/opt/webserver/webapps_*. This includes all WAR files from the source repository directory specified by the `*$ARTIFACT_DIR*` environment variable. The default value of `*$ARTIFACT_DIR*` is the *_target_* directory. -. All WAR files from the *_deployments_* source repository directory are copied to *_/opt/webserver/webapps_*. -. All files in the *_configuration_* source repository directory are copied to *_/opt/webserver/conf_*. -+ -[NOTE] -If you want to use custom Tomcat configuration files, the file names should be the same as for a normal Tomcat installation. For example, *_context.xml_* and *_server.xml_*. - -== Troubleshooting - -In addition to viewing the OpenShift logs, you can troubleshoot a running JBoss Web Server container by viewing the logs that are outputted to the container's console: - ----- -$ oc logs -f ----- - -Additionally, access logs are written to *_/opt/webserver/logs/_*. diff --git a/using_images/xpaas_images/sso.adoc b/using_images/xpaas_images/sso.adoc deleted file mode 100644 index 1817b8746fd5..000000000000 --- a/using_images/xpaas_images/sso.adoc +++ /dev/null @@ -1,191 +0,0 @@ -[[using-images-xpaas-images-sso]] -= Red Hat Single Sign-On (SSO) xPaaS Image -{product-author} -{product-version} -:data-uri: -:icons: -:experimental: -:toc: macro -:toc-title: - -toc::[] - -[IMPORTANT] -==== -This image is currently in https://access.redhat.com/support/offerings/techpreview[Technical Preview] and not intended for production use. -==== - -== Overview - -Red Hat Single Sign-On (SSO) is an integrated sign-on solution available as a containerized xPaaS image designed for use with OpenShift. 
This image provides an authentication server for users to centrally log in, log out, register, and manage user accounts for web applications, mobile applications, and RESTful web services. - -Red Hat offers five SSO application templates: - -* *_sso70-basic_*: SSO backed by an H2 database on the same pod -* *_sso70-mysql_*: SSO backed by a MySQL database on a separate pod -* *_sso70-mysql-persistent_*: SSO backed by a persistent MySQL database on a separate pod -* *_sso70-postgresql_*: SSO backed by a PostgreSQL database on a separate pod -* *_sso70-postgresql-persistent_*: SSO backed by a persistent PostgreSQL database on a separate pod - -An SSO-enabled Red Hat JBoss Enterprise Application Platform (JBoss EAP) image is also offered, which enables users to deploy a JBoss EAP instance that can be used with SSO for authentication: - -* *_eap64-sso-s2i_*: SSO-enabled JBoss EAP - -== Differences Between the SSO xPaaS Application and Keycloak -The SSO xPaaS application is based on Keycloak, a JBoss community project. There are some differences in functionality between the Red Hat Single Sign-On xPaaS application and Keycloak: - -* This image is currently available as a Technical Preview for use only with SSO-enabled Red Hat JBoss Enterprise Application Platform (JBoss EAP) applications. -* The SSO xPaaS Technical Preview application includes all of the functionality of Keycloak 1.8.1. In addition, the SSO-enabled JBoss EAP image automatically handles OpenID Connect or SAML client registration and configuration for *_.war_* deployments that contain *KEYCLOAK* or *KEYCLOAK-SAML* in their respective *web.xml* files. - -== Versioning for xPaaS Images -See the xPaaS part of the https://access.redhat.com/articles/2176281[OpenShift and Atomic Platform Tested Integrations page] for details about OpenShift image version compatibility. - -== Prerequisites for Deploying the SSO xPaaS Image -The following is a list of prerequisites for using the SSO xPaaS image: - -. 
*Acquire Red Hat Subscriptions*: Ensure that you have the relevant OpenShift subscriptions as well as a subscription for xPaaS Middleware. -. *Install OpenShift*: Before using the OpenShift xPaaS images, you must have an -OpenShift environment installed and configured. See the -xref:../../install/index.adoc#install-planning[Installing Clusters] guide for -steps on using Ansible to install in production environments. -. Ensure the *DNS* has been configured. This is required for communication between JBoss EAP and SSO, and for the requisite redirection. See xref:../../install/prerequisites.adoc#prereq-dns[DNS] for more information. -. *Install and Deploy Container Image Registry*: Install the container image registry and then ensure that the container image registry is deployed to locally manage images: -+ ----- -$ oc adm registry --config=/etc/origin/master/admin.kubeconfig ----- -+ -For more information, see xref:../../install_config/registry/index.adoc#install-config-registry-overview[Deploying a Container Image Registry] -. *Deploy a Router*. For more information, see xref:../../install_config/router/index.adoc#install-config-router-overview[Deploying a Router]. -. Ensure that you can run the `oc create` command with xref:../../architecture/additional_concepts/authorization.adoc#roles[cluster-admin] privileges. - -== Using the SSO Image Streams and Application Templates -The Red Hat xPaaS middleware images were -xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[automatically created during the installation] -of OpenShift along with the other default image streams and templates. - -== Preparing and Deploying the SSO xPaaS Application Templates -=== Using the OpenShift CLI - -. Prepare the JBoss EAP and SSO application service accounts and their associated secrets. -+ ----- -$ oc create -n -f /secrets/eap-app-secret.json ----- -+ ----- -$ oc create -n -f /secrets/sso-app-secret.json ----- -. 
Deploy one of the SSO application templates. This example deploys the *_sso70-postgresql_* template, which deploys an SSO pod backed by a PostgreSQL database on a separate pod. -+ ----- -$ oc process -f /sso/sso70-postgresql.json | oc create -n -f - ----- -+ -Or, if the template has been imported into common namespace: -+ ----- -$ oc new-app --template=sso70-postgresql -n ----- - -=== Using the OpenShift Web Console -Log in to the OpenShift web console: - -. Click *Add to project* to list all of the default image streams and templates. -. Use the *Filter by keyword* search bar to limit the list to those that match _sso_. You may need to click *See all* to show the desired application template. -. Click an application template to list all of the deployment parameters. These parameters can be configured manually, or can be left as default. -. Click *Create* to deploy the application template. - -=== Deployment Process -Once deployed, two pods will be created: one for the SSO web servers and one for the database. After the SSO web server pod has started, the web servers can be accessed at their custom configured hostnames, or at the default hostnames: - -* _http://sso-./auth_: for the web server, and -* _https://secure-sso-./auth_: for the encrypted web server. - -The default login username/password credentials are _admin_/_admin_. - -== Quickstart Example: Using the SSO xPaaS Image with the SSO-enabled JBoss EAP xPaaS Image -This example uses the OpenShift web console to deploy SSO xPaaS backed by a PostgreSQL database. Once deployed, an SSO realm, role, and user will be created to be used when configuring the SSO-enabled JBoss EAP xPaaS Image deployment. Once successfully deployed, the SSO user can then be used to authenticate and access JBoss EAP. - -=== Deploy the SSO Application Template - -. Log in to the OpenShift web console and select the project space. -. Click *Add to project* to list all of the default image streams and templates. -. 
Use the *Filter by keyword* search bar to limit the list to those that match _sso_. You may need to click *See all* to show the desired application template. -. Click the *_sso70-postgresql_* application template to list all of the deployment parameters. These parameters will be left as default for this example. -. Click *Create* to deploy the application template and start pod deployment. This may take a couple of minutes. - -=== Create SSO Credentials -Log in to the encrypted SSO web server at _https://secure-sso-./auth_ using the default _admin_/_admin_ user name and password. - -* *Create a Realm* - -. Create a new realm by hovering your cursor over the realm namespace (default is *Master*) at the top of the sidebar and click the *Add Realm* button. -. Enter a realm name and click *Create*. - -* *Copy the Public Key* -In the newly created realm, click the *Keys* tab and copy the public key that has been generated. This will be needed to deploy the SSO-enabled JBoss EAP image. - -* *Create a Role* -Create a role in SSO with a name that corresponds to the JEE role defined in the *web.xml* of the example application. This role will be assigned to an SSO _application user_ to authenticate access to user applications. - -. Click *Roles* in the *Configure* sidebar to list the roles for this realm. As this is a new realm, there should only be the default _offline_access_ role. Click *Add Role*. -. Enter the role name and optional description and click *Save*. - -* *Create Users and Assign Roles* -Create two users. The _realm management user_ will be assigned the *realm-management* roles to handle automatic SSO client registration in the SSO server. The _application user_ will be assigned the JEE role, created in the previous step, to authenticate access to user applications. - -Create the _realm management user_: - -. Click *Users* in the *Manage* sidebar to view the user information for the realm. Click *Add User*. -. 
Enter a valid *Username* and any additional optional information for the _realm management user_ and click *Save*. -. Edit the user configuration. Click the *Credentials* tab in the user space and enter a password for the user. After the password has been confirmed, you can click the *Reset Password* button to set the user password. A pop-up window will prompt for additional confirmation. -. Click *Role Mappings* to list the realm and client role configuration. In the *Client Roles* drop-down menu, select *realm-management* and add all of the available roles to the user. This provides the user with SSO server rights that can be used by the JBoss EAP image to create clients. - -Create the _application user_: - -. Click *Users* in the *Manage* sidebar to view the user information for the realm. Click *Add User*. -. Enter a valid *Username* and any additional optional information for the _application user_ and click *Save*. -. Edit the user configuration. Click the *Credentials* tab in the user space and enter a password for the user. After the password has been confirmed, you can click the *Reset Password* button to set the user password. A pop-up window will prompt for additional confirmation. -. Click *Role Mappings* to list the realm and client role configuration. In *Available Roles*, add the JEE role created earlier. - -=== Deploy the SSO-enabled JBoss EAP Image - -. Return to the OpenShift web console and click *Add to project* to list all of the default image streams and templates. -. Use the *Filter by keyword* search bar to limit the list to those that match _sso_. You may need to click *See all* to show the desired application template. -. Click the *_eap64-sso-s2i_* image to list all of the deployment parameters. Edit the configuration of the following SSO parameters: -+ -* *SSO_URI*: The SSO web server authentication address: _https://secure-sso-<project-name>.<hostname>/auth_ -* *SSO_REALM*: The SSO realm created for this procedure. 
-* *SSO_USERNAME*: The name of the _realm management user_. -* *SSO_PASSWORD*: The password of the user. -* *SSO_PUBLIC_KEY*: The public key generated by the realm. It is located in the *Keys* tab of the *Realm Settings* in the SSO console. -* *SSO_BEARER_ONLY*: If set to *true*, the OpenID Connect client will be registered as bearer-only. -* *SSO_ENABLE_CORS*: If set to *true*, the Keycloak adapter enables Cross-Origin Resource Sharing (CORS). -. Click *Create* to deploy the JBoss EAP image. - -It may take several minutes for the JBoss EAP image to deploy. When it does, it can be accessed at: - -* _$$http://<application-name>-<project-name>.<hostname>/$$_: for the web server, and -* _$$https://secure-<application-name>-<project-name>.<hostname>/$$_: for the encrypted web server, where <application-name> is one of app-jee, app-profile-jee, app-profile-jee-saml, or service, depending on the example application. - -==== Alternate Deployments -You can also create the client registration in the *Clients* frame of the *Configure* sidebar. Once a client has been registered, click the *Installation* tab and download the configuration *_.xml_*: - -* For OpenID Connect application sources, save the *Keycloak OIDC JBoss Subsystem XML* to *_/configuration/secure-deployments_*. -* For SAML application sources, save the *Keycloak SAML Wildfly/JBoss Subsystem* to *_/configuration/secure-saml-deployments_*. - -You can also edit the *_standalone-openshift.xml_* of the JBoss EAP image, which will deploy the manual configuration instead of the default. For more information, see xref:../../using_images/xpaas_images/eap.adoc#using-a-modified-jboss-eap-xpaas-image[Using a Modified JBoss EAP xPaaS Image]. - -=== Log in to the JBoss EAP Server Using SSO - -. Access the JBoss EAP application server and click *Login*. You will be redirected to the SSO login. -. Log in using the SSO user created in the example. You will be authenticated against the SSO server and returned to the JBoss EAP application server. 
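The web console steps above can equivalently be performed from the CLI with `oc new-app` and `-p` parameter overrides. A minimal sketch, assuming the *_eap64-sso-s2i_* template is available in the current project; the `deploy_sso_eap` helper and its `DRY_RUN` switch are hypothetical conveniences, not part of the product:

```shell
# Hypothetical helper: build the oc new-app invocation for the
# eap64-sso-s2i template from the SSO parameters described above.
# With DRY_RUN=1 the command is printed for inspection instead of run.
deploy_sso_eap() {
  local realm="$1" user="$2" pass="$3" key="$4"
  local cmd=(oc new-app --template=eap64-sso-s2i
    -p "SSO_REALM=$realm"
    -p "SSO_USERNAME=$user"
    -p "SSO_PASSWORD=$pass"
    -p "SSO_PUBLIC_KEY=$key")
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "${cmd[@]}"      # show the full command without touching the cluster
  else
    "${cmd[@]}"
  fi
}
```

For example, `DRY_RUN=1 deploy_sso_eap demo-realm realm-admin s3cret "MIIBIjAN..."` prints the generated `oc new-app` command so the parameter values can be verified before deploying.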
- -== Known Issues - -* There is a known issue with the EAP6 adapter _HttpServletRequest.logout()_ in which the adapter does not log out from the application, which can create a login loop. The workaround is to call _HttpSession.invalidate();_ after _request.logout()_ to clear the Keycloak token from the session. For more information, see https://issues.jboss.org/browse/KEYCLOAK-2665[KEYCLOAK-2665]. -* The SSO logs throw a duplication error if the SSO pod is restarted while backed by a database pod. This error can be safely ignored. -* Setting _adminUrl_ to a "https://..." address in an OpenID Connect client will cause *javax.net.ssl.SSLHandshakeException* exceptions on the SSO server if the default secrets (*sso-app-secret* and *eap-app-secret*) are used. The application server must either use CA-signed certificates or configure the SSO trust store to trust the self-signed certificates. -* If the client route uses a different domain suffix from the SSO service, the client registration script will erroneously configure the client on the SSO side, causing bad redirection. -* The SSO-enabled JBoss EAP image does not properly set the *adminUrl* property during automatic client registration. As a workaround, log in to the SSO console after the application has started and manually modify the client registration *adminUrl* property to *$$http://<application-name>-<project-name>.<hostname>/$$*. 
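The default route hostnames used throughout this topic, including the value needed for the manual *adminUrl* workaround above, follow a fixed `<name>-<project>.<hostname>` pattern. A small sketch with a hypothetical `route_urls` helper that derives both URLs; the function name and arguments are illustrative:

```shell
# Hypothetical helper: derive the plain and encrypted route URLs for an
# application from its name, project, and router hostname suffix,
# following the <name>-<project>.<hostname> pattern described above.
route_urls() {
  local name="$1" project="$2" suffix="$3"
  echo "http://${name}-${project}.${suffix}/"
  echo "https://secure-${name}-${project}.${suffix}/"
}
```

For example, `route_urls app-jee demo apps.example.com` prints the two URLs for the _app-jee_ example application in a project named _demo_.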
diff --git a/welcome/index.adoc b/welcome/index.adoc index 829065b0ee45..2c1bad4665e8 100644 --- a/welcome/index.adoc +++ b/welcome/index.adoc @@ -137,7 +137,6 @@ a|[none] * xref:../using_images/s2i_images/index.adoc#using-images-s2i-images-index[Web frameworks powered by Source-to-Image (S2I)] * xref:../using_images/db_images/index.adoc#using-images-db-images-index[Databases to back your application] ifdef::openshift-enterprise,openshift-dedicated,openshift-online[] -* xref:../using_images/xpaas_images/index.adoc#using-images-xpaas-images-index[Services provided by xPaaS Middleware Images] endif::[] * xref:../using_images/other_images/other_container_images.adoc#using-images-other-container-images[Or, bring and run any container image]