
Internationalization of Web DynPro components part 2: resource bundles and PDFs


In addition to S2X-based i18n you can use classic Java properties-based i18n. In some cases it is more convenient, e.g. the property bundles can be created by tools such as the Eclipse string externalization wizard.

But you must remember that the string externalization tool was designed for rich-client desktop applications. In web systems, ResourceBundle.getBundle and similar methods are inadequate: they resolve against the server's locale, which in most cases is not the locale the user is actually using!
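The mismatch can be demonstrated with plain JDK classes, without any Web Dynpro API. This is a minimal sketch: the bundle classes, their contents, and the class name LocaleDemo are all made up for the example; in a real application the user locale would come from the session, not from a hard-coded new Locale("pl").

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

public class LocaleDemo {
    // Base (English) bundle, used when no better match exists
    public static class Msgs extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "greeting", "Hello" } };
        }
    }

    // Polish variant, found via the standard "_pl" suffix lookup
    public static class Msgs_pl extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "greeting", "Witaj" } };
        }
    }

    public static void main(String[] args) {
        Locale.setDefault(Locale.ENGLISH); // pretend the *server* runs in English
        String bundle = LocaleDemo.class.getName() + "$Msgs";

        // One-argument getBundle resolves against the server's default locale...
        System.out.println(ResourceBundle.getBundle(bundle)
            .getString("greeting")); // prints "Hello"

        // ...while the two-argument form honours the locale you pass in, which
        // in a web application should be the locale of the user's session.
        Locale userLocale = new Locale("pl");
        System.out.println(ResourceBundle.getBundle(bundle, userLocale)
            .getString("greeting")); // prints "Witaj"
    }
}
```

A Polish user served by an English server gets "Hello" from the one-argument call, which is exactly the bug described above.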

The following code will use the user’s current locale:

public static String getString(String key) {
  // Get the locale of the current user session, not the server locale
  Locale sessionLocale = WDResourceHandler.getCurrentSessionLocale();
  IWDResourceHandler resourceHandler = WDResourceHandler
    .createResourceHandler(sessionLocale);
  // BUNDLE_NAME stands for your bundle's fully qualified name
  resourceHandler.loadResourceBundle(BUNDLE_NAME,
    WnioskiListMessages.class.getClassLoader());
  try {
    return resourceHandler.getString(key);
  } catch (MissingResourceException e) {
    return '!' + key + '!';
  }
}

You can use this code as a drop-in replacement for the Eclipse-generated code.

Another case to be dealt with is where the localized strings are used. When they are displayed in messages there is no problem, because the SAP engine uses Unicode encoding. With PDFs, however, it is not so easy.

I will describe what to do if you use the popular iText library. When you add text, you provide a font; fonts are created via BaseFont.createFont. There are a couple of fonts ready to use by the PDF engine, whose names are defined in the BaseFont class. However, with these fonts you cannot use the IDENTITY-H encoding, which covers the Unicode character range. And when you ship a multilanguage version, you cannot expect users to stick to a limited character range. Even an English-only version has this problem: imagine someone writes information about a planned business trip to Russia and wants to give the name of the target city in Cyrillic…

The best solution would be to analyze each character and dynamically switch the font used in the PDF to one covering the given part of the text. Good enough in most cases is to use a single font with good coverage of the Unicode range. For Asian scripts it would be very hard to find such a font, but for my needs I will consider European languages only.
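The per-character analysis mentioned above can be sketched in plain Java: split the text into runs that can each be drawn with a single font. The font names here are placeholders, and only Cyrillic vs. Latin is distinguished; a real implementation would map more Unicode blocks and hand each run to iText with the matching BaseFont.

```java
import java.lang.Character.UnicodeBlock;
import java.util.ArrayList;
import java.util.List;

public class FontRunSplitter {

    // Pick a (hypothetical) font name covering a single character
    static String fontFor(char c) {
        UnicodeBlock block = UnicodeBlock.of(c);
        return block == UnicodeBlock.CYRILLIC ? "CyrillicFont" : "LatinFont";
    }

    /** Splits text into {fontName, textRun} pairs covering the whole string. */
    public static List<String[]> split(String text) {
        List<String[]> runs = new ArrayList<String[]>();
        StringBuilder current = new StringBuilder();
        String currentFont = null;
        for (int i = 0; i < text.length(); i++) {
            String font = fontFor(text.charAt(i));
            if (currentFont != null && !font.equals(currentFont)) {
                runs.add(new String[] { currentFont, current.toString() });
                current.setLength(0);
            }
            currentFont = font;
            current.append(text.charAt(i));
        }
        if (currentFont != null) {
            runs.add(new String[] { currentFont, current.toString() });
        }
        return runs;
    }

    public static void main(String[] args) {
        for (String[] run : split("Trip to Москва")) {
            System.out.println(run[0] + ": " + run[1]);
        }
        // prints:
        // LatinFont: Trip to 
        // CyrillicFont: Москва
    }
}
```

Each run would then be added to the document as its own Chunk with the appropriate font.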

When you dig into the iText tutorials, you'll find one that calls BaseFont.createFont with Windows' standard Arial Unicode font. This font covers a very large number of national character sets. The problem is that the target machine for your application need not be Windows… and you don't have the rights to embed that font in a PDF or ship it with your application.

But there are plenty of free fonts available. The font I've found good enough is FreeSerif from the GNU FreeFont package, which can be downloaded from the project's website.

After downloading it, I made a jar file from it, placing all the fonts in a package named fonts. Then I added the jar to a library component, wrapped it in a J2EE Server Library DC and deployed it to the server. After that I was able to embed and use my fonts with the following code:

// Resolve the font packaged inside the deployed jar
String fontPath = getClass().getClassLoader()
  .getResource("fonts/FreeSerif.ttf").toExternalForm();
BaseFont baseFont = BaseFont.createFont(
  fontPath, BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
