Liferay database configuration file

However, there is one modification aimed at removing confusion by clearly distinguishing the Staging and Publications UI labels and phrases. LPS Trusted applications in OAuth2: for trusted applications, there is no manual authorization grant step that the user must take.

This feature makes it possible to mark OAuth2 applications as trusted, so that the manual authorization step is skipped when a token is obtained. It is useful for customers who want to control the cases in which user authorization is not necessary. A related feature remembers a user's manual authorization of an OAuth2 application on a given device, so that subsequent requests for a new token from that same device do not require the manual authorization again.

This is useful for customers who want a user's authorization on a device to be required only once while their Portal session is active. The "remember my authorization" grant is especially useful for pure JavaScript single-page applications that invoke features of an OAuth2 application.

LPS A new column in the registered application list lets administrators see at a glance whether a specific app is marked as trusted or allowed to use "remember my authorization", without opening its detail page, which makes the applications easier to manage. LPS A function to purge all current authorizations for a specific app directly from the list of registered apps; it lets OAuth2 administrators revoke all access to an application very quickly.

LPS Over the years, support for the IE11 browser has led to many workarounds, polyfills and even specific settings for building JavaScript in order to overcome IE11 limitations. Starting with 7.x, IE11 has been removed from the list of supported browsers. Additionally, various browsers have already issued end-of-support notices for Flash or removed it entirely; therefore, in Liferay 7.x there is no feature that depends on Flash and there are no Flash applications.

Server-side Soy support has also been removed; this includes the Soy template engine and the SoyPortlet. Please note that using Soy client-side components is still possible, because metal and metal-soy are still available. As a consequence, all Soy-supporting tags have been upgraded to work without the Soy engine, for an easier upgrade path.

LPS To facilitate theme upgrades from Bootstrap 3 to 4, we added a compatibility layer in the form of some CSS files in the styled, classic and admin themes, which allowed Bootstrap 3 styles to keep working. This layer consumed unnecessary resources and could conflict with other styles, so it has now been removed.

As a result of this removal, the admin and classic themes become simpler and lighter, and all Bootstrap 3 styles in the portal have been updated to use Bootstrap 4. LPS For many years, DXP has had a complex minification scheme that minifies resources on the fly on the server. This has caused plenty of performance and runtime issues over time and has even stalled progress, because the minifiers are outdated and don't support new language features.

As a result, minification now happens at build time: if the build is set to production, the build process minifies all JS files after building them; otherwise it leaves the modules unminified.

Run-time minification logic has been turned off by default and marked as deprecated.

Also, for interest's sake, the name is "Liferay" (all one word, with only the L capitalized), not "life-ray".

Corrected name spelling in the question. For completeness' sake, you might want to edit in the actual Liferay version number as well. I don't have all versions here, but you gave the Tomcat version number, not Liferay's.

Add the jdbc.* connection lines to your portal-ext.properties file (a sketch is shown below). The provider configuration file contains the fully qualified class names (FQCN) of your service providers, one name per line. The file must be UTF-8 encoded. Additionally, you can include comments in the file by beginning the comment line with the number sign (#). You can download the latest binary JAR from the Maven Central Repository (liferay-portal-database-all-in-one-support); by doing so you can avoid doing the build yourself.
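The exact jdbc.* lines were lost from the quoted answer. As a minimal sketch, assuming a MySQL database named lportal and the standard Liferay jdbc.default.* properties (adjust the driver class, URL and credentials to your own database), they would look like this:

    # portal-ext.properties: direct JDBC connection for the default data source
    jdbc.default.driverClassName=com.mysql.cj.jdbc.Driver
    jdbc.default.url=jdbc:mysql://localhost:3306/lportal?useUnicode=true&characterEncoding=UTF-8
    jdbc.default.username=liferay
    jdbc.default.password=liferay

As for the provider configuration file, this is the standard Java ServiceLoader format; for illustration, a JDBC 4 driver JAR declares its driver class like this (the MySQL class name is only an example):

    # contents of META-INF/services/java.sql.Driver, one fully qualified class name per line
    com.mysql.cj.jdbc.Driver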

Since version 7.x this has changed; in the case of the Tomcat bundle, the new directory is inside the tomcat folder of the bundle. The portal-ext.properties entries are shown above. You could also configure database access as a JNDI resource and specify the resource name in the configuration (see the sketch below). For Liferay to be able to connect to the database, you must install a JDBC driver compatible with both the specific database version and the JVM.
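If you prefer the JNDI approach mentioned above, a sketch of the two pieces could look like this; the resource name jdbc/LiferayPool, the MySQL driver and the credentials are illustrative assumptions, not values from the original answer. First, the data source definition in Tomcat's conf/context.xml (or the ROOT web application context):

    <Resource name="jdbc/LiferayPool" auth="Container" type="javax.sql.DataSource"
              driverClassName="com.mysql.cj.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/lportal?useUnicode=true&amp;characterEncoding=UTF-8"
              username="liferay" password="liferay"
              maxTotal="20" maxIdle="5" />

Then, in portal-ext.properties, point Liferay at that resource instead of the jdbc.default.* connection settings:

    jdbc.default.jndi.name=jdbc/LiferayPool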

Here are the links to the resources for downloading the JDBC driver. The following documents (see the database section) provide details of the configurations that are certified by Liferay.

You can see the complete documents on the Liferay Portal site. Basically, this version is not ready to be run in clustered mode by default, because it only contains the "non-clustered" (single) versions of three modules. We need to disable them and instead use the "multiple" versions. The article linked above describes the steps for obtaining and compiling them, so I won't go into too many details here.

I will just describe what to do with them once you get them. Note: if you use a 7.x release, you might not need that step at all, because your version may already support the multiple module versions out of the box.

This is true if you use a DXP version or a newer version of Liferay; to check, just run the gogo shell command "lb portal." in your Liferay instance. The Dockerfile-Liferay we defined before will do the rest, i.e. take care of swapping the modules. We are almost done, but we still haven't told Liferay that we want to use a clustered environment. How do we do that? Well, it's quite simple actually: all we need is to add the following line to our portal-ext.properties (see the sketch below). But hey, is it really that simple?
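The property itself did not survive in the text above. Assuming the standard Liferay cluster link setting that the next paragraph describes (JGroups over UDP multicast by default), the line added to portal-ext.properties would be:

    cluster.link.enabled=true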

Well, actually it is, at least in some cases. Basically, this property makes the Liferay nodes use a JGroups channel to communicate with each other on default IPs and ports over a UDP multicast connection. Therefore it will only work if the Liferay instances can reach each other through multicast, which is not always the case. Even if your servers are on the same subnet this still might not work; for example, Azure doesn't support multicast in virtual networks, and they weren't planning to add it last time I checked.

In our case it will work, though, because we use docker-compose on one machine and Docker provides the multicast connection. If you need another solution, check the Liferay documentation, where a different method (unicast over TCP) is described.

For our environment this is enough, though; we don't need to do anything else. You might, however, also want to add one more property, which displays a message showing which Liferay node served the current page (see the sketch below). This is useful for testing purposes, so I encourage you to add it. Of course, you may add any other properties you need. You might also want to check the Liferay docs to read how exactly the JGroups communication works.
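The property was also lost from the original text. Assuming the standard Liferay setting for displaying the serving node, a minimal portal-ext.properties addition would be:

    web.server.display.node=true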

The JGroups design is quite simple and clever, and it should take you just a couple of minutes to read about it and understand the basics. Please note that this next step is totally optional. You might already have a load balancer or an HTTP server like Apache or Nginx installed on your server; in that case you can't declare another one here, and you should instead do the load balancing in your existing setup or ask your server administrator to do it for you. If you want or need to do it yourself, you can always google something like "Load balancing Apache2" or "Load balancing HAProxy", etc.

You can also follow the steps below for an HAProxy configuration on Docker, but the steps are similar if you have it installed as a regular service.

Important: you need sticky sessions in your load balancer. This matters because the session is not shared between cluster nodes by default, so if a user gets redirected to another node they will be logged out. This might be an issue if you use Nginx, which is really popular, because by default sticky sessions are only available in the paid version, as far as I know.
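For reference, this is how stickiness is typically expressed in HAProxy: the balancer inserts a cookie and each backend server is tagged with its own cookie value, so follow-up requests return to the same node. The backend and server names below are placeholders rather than values from the original article; the full configuration file is sketched at the end of this post.

    backend liferay_back
        balance roundrobin
        cookie SRV insert indirect nocache
        server node1 liferay1:8080 check cookie node1
        server node2 liferay2:8080 check cookie node2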

So the last thing we need is load balancing between our servers. We obviously don't want users to pick the server they enter themselves; instead we want a single URL (e.g. app.example.com) in front of the whole cluster. The diagram below shows the basic idea.

[Diagram of the architecture: a load balancer distributing requests to the Liferay nodes]

In the example I will use HAProxy, but you can use whatever you want, as this step is totally independent from Liferay (see the "Note" above).

So first we want to build another Dockerfile. Let's call it "Dockerfile-haproxy" and copy the content shown in the sketch below into it. There is nothing special in this code: we just copy our config file into the HAProxy image, which is actually the recommended way shown on Docker Hub. Then we define our service in docker-compose: we point to our Dockerfile, use the current directory as the build context, expose the required port and set the hostname (also sketched below). I believe it's quite clear.
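The file contents did not survive in the text above, so here is a minimal sketch; the HAProxy image tag, the exposed port and the Liferay service names are assumptions, not values from the original article. Dockerfile-haproxy, following the usage recommended on Docker Hub:

    FROM haproxy:2.4
    # copy our configuration to the location the official image expects
    COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

And the corresponding service entry in docker-compose.yml:

    haproxy:
      build:
        context: .                      # current directory as build context
        dockerfile: Dockerfile-haproxy
      hostname: haproxy
      ports:
        - "80:80"                       # port the load balancer listens on
      depends_on:
        - liferay1                      # assumed names of the Liferay services
        - liferay2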

We also need to fill in the haproxy.cfg file; a sketch is shown below.
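Here is a minimal haproxy.cfg sketch that matches the setup described above: round-robin balancing across two Liferay nodes with cookie-based sticky sessions. The node hostnames and ports follow the assumptions made in the docker-compose sketch, not the original file.

    global
        maxconn 4096

    defaults
        mode http
        timeout connect 5s
        timeout client  50s
        timeout server  50s

    frontend liferay_front
        bind *:80
        default_backend liferay_back

    backend liferay_back
        balance roundrobin
        # sticky sessions: insert the SRV cookie and route the user back to the node named in it
        cookie SRV insert indirect nocache
        server node1 liferay1:8080 check cookie node1
        server node2 liferay2:8080 check cookie node2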


