Tag Archives: eclipse

Publish an Eclipse p2 repository on Sourceforge with rsync

This can be seen as a follow-up of my previous post on building Eclipse p2 composite repositories. In this blog post I’ll show an automated way of publishing an Eclipse p2 (composite) repository (a.k.a. update site) on Sourceforge, using rsync for synchronization. You may find online many posts about publishing update sites on Github pages and, recently, on bintray. (As a reminder, rsync is a one-way synchronization tool; we assume that the master replica is the one on sourceforge, and, being a synchronization tool, rsync will only transfer the files that changed during synchronization.)

I prefer sourceforge for several reasons:

  • you have full access to the file upload system, either with a shell or, most importantly for the technique I’ll describe here, with rsync (from what I understand, bintray instead manages the binary artifacts for you);
  • in order to create and update a p2 composite site you must have access to the current file system layout of the p2 update site, which, as far as I understand, is not possible with bintray;
  • you have download statistics, and your artifacts will automatically be mirrored on sourceforge’s mirrors.

By the way: you can store your git repository anywhere you want and publish the binaries on sourceforge (see this page and this other page).

I’ll reuse the same example of the previous post, the repository found here https://github.com/LorenzoBettini/p2composite-example, where you find all the mechanisms for creating and updating a p2 composite repository.

The steps of the technique I’ll describe here can be summarized as follows: when it comes to releasing a new child in the p2 composite update site (possibly already published on Sourceforge), the following steps are performed during the Maven/Tycho build:

  1. Use rsync to get an updated local version of the published p2 composite repository somewhere in your file system (this includes the case when you have never released a version, in which case you’ll simply get an empty local directory)
  2. Build the p2 repository with Tycho
  3. Add the above created p2 repository as a new child in the local p2 composite repository (this includes the case where you create a new composite repository, since that’s your first release)
  4. Use rsync to commit the changes back to the remote p2 composite repository

Since we use rsync, we get several advantages:

  • we’re allowed to manually modify (i.e., from outside the build infrastructure) the p2 composite repository, for instance by removing a child repository containing a wrong release, and commit the changes back;
  • we can release from any machine, notably from Jenkins or Hudson, since we always make sure to have a synchronized local version of the released p2 composite repository.

Prepare the directory on Sourceforge

This assumes that you have an account on Sourceforge and that you have registered a project. You need to create the directory that will host your p2 composite repository in the “Files” section.

For this example I created a new project, eclipseexamples (https://sourceforge.net/projects/eclipseexamples/), and I plan to store the p2 composite in the sourceforge file system under the path p2composite.example/updates.

So I’ll create the directory structure accordingly (using the “Add Folder” button):

sourceforge create folder structure 1

sourceforge create folder structure 2

sourceforge create folder structure 3

Ant script for rsync

I’m using an ant script since it’s easy to call it from Maven, and also manually from the command line. This assumes that you already have rsync installed on your machine (or on the CI server from which you plan to perform releases).

This ant file is meant to be completely reusable.

Here’s the ant file
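(What follows is a minimal sketch of its structure; the rsync options and property defaults are illustrative, see the example project for the complete file.)

<project name="p2-composite-rsync" default="update">
	<!-- set dryrun to -n (e.g., -Ddryrun=-n) to only simulate the synchronization -->
	<property name="dryrun" value=""/>
	<property name="rsync.local.dir" value="${user.home}/p2.repositories/updates/"/>
	<property name="rsync.remote.dir" value="user,project@frs.sourceforge.net:/home/frs/project/project/updates/"/>

	<macrodef name="rsync">
		<attribute name="source"/>
		<attribute name="dest"/>
		<sequential>
			<!-- -e ssh: transfer over ssh; --delete: mirror deletions as well -->
			<exec executable="rsync" failonerror="true">
				<arg line="${dryrun} -avz --delete -e ssh @{source} @{dest}"/>
			</exec>
		</sequential>
	</macrodef>

	<!-- get an updated local version of the published repository -->
	<target name="update">
		<mkdir dir="${rsync.local.dir}"/>
		<rsync source="${rsync.remote.dir}" dest="${rsync.local.dir}"/>
	</target>

	<!-- commit the local changes back to the remote repository -->
	<target name="commit">
		<rsync source="${rsync.local.dir}" dest="${rsync.remote.dir}"/>
	</target>
</project>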

We have a macro for invoking rsync with the desired options (have a look at the rsync documentation to understand their meaning; it should be straightforward to get an idea).

In particular, the transfer will be done with ssh, so you must have an ssh key pair, and you must have put the public key on your account on sourceforge. Either you created the key pair without a passphrase (e.g., for releasing from a CI server of your own), or you must make sure you have already unlocked the key pair on your local machine (e.g., with an ssh-agent, or with a keyring, depending on your OS).

The arguments source and dest will depend on whether we’re doing an update or a commit (see the two ant targets). If you define the property dryrun as -n then you can simulate the synchronization (both for update and commit); this is important at the beginning to make sure that you synchronize what you really mean to synchronize. Recall that when you perform an update, specifying the wrong local directory might lead to a complete deletion of that directory (the same holds for commit and the remote directory). Moreover, source and destination URLs in rsync have different semantics depending on whether they terminate with a slash or not, so make sure you understand them if you need to customize this ant file or to pass special URLs.

The properties rsync.remote.dir and rsync.local.dir will be passed from the Tycho build (or from the command line if you call the ant script directly). Once again, please use the dryrun property until you’re sure that you’re synchronizing the right paths (both local and remote).

Releasing during the Tycho build

Now we just need to call this ant file’s targets appropriately from the Tycho build; I’ll do that in the pom.xml of the project that builds and updates the composite p2 repository.

Since I don’t want to push a new release on the remote site on each build, I’ll configure the plugins inside a profile (it’s up to you to decide when to release): here’s the new part:
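The following sketch shows the overall shape of that part (the profile id matches the one used later for releasing; the ant file name rsync.ant and the execution ids are illustrative):

<profile>
	<id>release-composite</id>
	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-antrun-plugin</artifactId>
				<executions>
					<!-- before updating the composite site: get a synchronized local version -->
					<execution>
						<id>update-local-p2-composite</id>
						<phase>prepare-package</phase>
						<goals><goal>run</goal></goals>
						<configuration>
							<target>
								<ant antfile="rsync.ant" target="update">
									<property name="rsync.remote.dir" value="${rsync.remote.dir}"/>
									<property name="rsync.local.dir" value="${rsync.local.dir}"/>
								</ant>
							</target>
						</configuration>
					</execution>
					<!-- after updating the composite site: commit the changes to the remote site -->
					<execution>
						<id>commit-remote-p2-composite</id>
						<phase>verify</phase>
						<goals><goal>run</goal></goals>
						<configuration>
							<target>
								<ant antfile="rsync.ant" target="commit">
									<property name="rsync.remote.dir" value="${rsync.remote.dir}"/>
									<property name="rsync.local.dir" value="${rsync.local.dir}"/>
								</ant>
							</target>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</profile>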

Now, the URL to access a remote path on sourceforge with ssh has the following shape:

<username>,<project>@frs.sourceforge.net:/home/frs/project/<project>/<path>

So in my case I specified (again, the final / is crucial for what we want to synchronize with rsync, see the note above):

lbettini,eclipseexamples@frs.sourceforge.net:/home/frs/project/eclipseexamples/p2composite.example/updates/

The local URL specifies where the local p2 composite site is stored (see the previous post), in this example it defaults to

${user.home}/p2.repositories/updates/

Again, the final / is crucial.

We configured the maven-antrun-plugin with two executions:

  1. before updating the p2 composite update site (phase prepare-package) we make sure we have a synchronized local version of the repository;
  2. after updating the p2 composite update site (phase verify) we commit the changes to the remote repository.

That’s all :)

Let’s try it

Of course, if you want to try it, you need a project on sourceforge and a directory on that project’s Files section (and you’ll have to change the URLs accordingly in the pom file).

To perform a release we need to call the build enabling the profile release-composite, and specify at least verify as goal:
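For example, something along these lines:

mvn clean verify -Prelease-composite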

Let’s say we still haven’t released anything.

Since the remote directory is empty, in our local file system we’ll simply have the directory created. At the end of the build, the composite site is created and the remote directory will be synchronized with our local contents:

Let’s have a look at the remote directory: it will contain the newly created p2 composite site

sourceforge uploaded artifacts 1

sourceforge uploaded artifacts 2

Let’s perform another release; our local copy is up-to-date so we won’t receive anything during the update phase, but then we’ll commit another release.

Let’s have a look at sourceforge and see the new release

sourceforge uploaded artifacts 3

Let’s remove our local copy and try to perform another release; this time the update phase will make sure our local composite repository is synchronized with the remote site (we’ll get the whole composite site we had already released), so that when we add another composite child we’ll update our local composite repository; then we’ll commit the changes to the server (again, by uploading only the modified files, i.e., compositeArtifacts.xml and compositeContent.xml, and the new directory with the new child repository):

Again, the remote site is correctly updated

sourceforge uploaded artifacts 4

Providing the URL of your p2 repository

Now that you have your p2 repository on sourceforge, you only need to give your users the URL to use for installing your features in Eclipse.

You have two forms for the URL:

  • This will use the mirror infrastructure of sourceforge: http://sourceforge.net/projects/<project>/files/<path>
  • This will bypass mirrors: http://master.dl.sourceforge.net/project/<project>/<path>

If you use the mirror form, when installing in Eclipse (or provisioning a target platform) you’ll see some warnings on the console.

But it’s safe to ignore them.

For our example the URL can be one of the following:

  • With mirrors: http://sourceforge.net/projects/eclipseexamples/files/p2composite.example/updates/
  • Main site: http://master.dl.sourceforge.net/project/eclipseexamples/p2composite.example/updates/

You may want to try them both in Eclipse.

Please keep in mind that you may hit some unavailability errors now and then, if sourceforge sites are down for maintenance or unreachable for any reason… but that’s not much different from when you hit a bad Eclipse mirror, or when the main Eclipse download site is down… I guess no hosting site is perfect anyway ;)

I hope you find this blog post useful, Happy releasing! :)

 


Creating p2 composite repositories during the build

I like to build p2 composite repositories for all my Eclipse projects, to keep all the versions available for consumption.

Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

The nice thing about composite repositories is that they can be nested at any level. Thus, I like to have nested composite repositories organized by major.minor, each containing the actual major.minor.service.qualifier child repositories.

Thus the layout of the p2 composite repository should be similar to the following screenshot

p2composite1

Note that the directories that contain a standard p2 repository have the same name as the contained feature.

The key points of a p2 composite repository are the two files compositeArtifacts.xml and compositeContent.xml. Their structure is simple, e.g.,
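For instance, a compositeContent.xml with two children may look like the following (compositeArtifacts.xml is analogous, with type CompositeArtifactRepository; the timestamp value is just an example):

<?xml version='1.0' encoding='UTF-8'?>
<?compositeMetadataRepository version='1.0.0'?>
<repository name='Composite Site Example All Versions'
		type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository' version='1.0.0'>
	<properties size='1'>
		<property name='p2.timestamp' value='1396114734000'/>
	</properties>
	<children size='2'>
		<child location='1.0'/>
		<child location='1.1'/>
	</children>
</repository>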

Note that a child location is intended relative to the path of these files; you can also specify absolute paths, not to mention http urls to other remote p2 sites.

The structure is not that complex, so you could also create it by hand; but keeping it up to date might not be that trivial. In that respect, p2 provides some ant tasks for managing composite repositories (creating, adding an entry, removing an entry), and that’s my favorite way to deal with composite repositories. I’ll detail what I usually do in this blog post, in particular, how to create (or update) a p2 composite repository with a new entry during the build.

The ant file is completely reusable and customizable by passing properties; you can reuse it as it is, after you setup your pom.xml as detailed below.

In this blog post I’ll show how to do that with Maven/Tycho, but the same procedure can be done in a Buckminster build (as I’ll hint at the end).

I’ll use a simple example, https://github.com/LorenzoBettini/p2composite-example, consisting of a plug-in project, a feature project, a project for the site, and a releng project (a Maven/Tycho parent project). The plug-in and feature project are not interesting in this context: the most interesting one is the site project (a Tycho eclipse-repository packaging type).

Of course, such ant tasks must be run with the org.eclipse.ant.core.antRunner application. Buckminster, as an Eclipse product, already contains that application. With Tycho, you can use the tycho-eclipserun-plugin to run an Eclipse application from Maven.

We use this technique for releasing new versions of our EMF-Parsley Eclipse project. We do that directly from our Hudson HIPP instance; the idea is that the location of the final main composite site is the one that will be served through HTTP from download.eclipse.org. We have a dedicated Hudson job that will release a new version and put it in the composite repository.

The ant file

The internal details of this ant file are not necessary to reuse it, so you can skip the first part of this section (you only need to know the main properties to pass). Of course, if you read it and you have suggestions for improving it, I’d be very grateful :)

The ant file consists of some targets and macro definitions.

The main macro definition is the one invoking the p2 ant task:
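A sketch of such a macro definition (attribute names are illustrative):

<macrodef name="add.composite.repository.internal">
	<attribute name="composite.repository.location"/>
	<attribute name="composite.repository.name"/>
	<attribute name="composite.repository.child"/>
	<sequential>
		<!-- add the child to the composite repository (created if it does not exist yet) -->
		<p2.composite.repository failOnExists="false">
			<repository compressed="false"
				location="@{composite.repository.location}"
				name="@{composite.repository.name}"/>
			<add>
				<repository location="@{composite.repository.child}"/>
			</add>
		</p2.composite.repository>
		<!-- create the p2.index file -->
		<echo file="@{composite.repository.location}/p2.index">version=1
metadata.repository.factory.order=compositeContent.xml,\!
artifact.repository.factory.order=compositeArtifacts.xml,\!
</echo>
	</sequential>
</macrodef>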

Note that we’ll also create a p2.index file. I prefer not to compress the compositeArtifacts.xml and compositeContent.xml files for easier inspection or manual modification, but you can compress them by setting the “compressed” property above to “true”.

This macro will be called twice in the main target.
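A sketch of the main target (property names are illustrative):

<target name="p2.composite.add" depends="compute.properties">
	<!-- 1. copy the p2 repository created by the build into the nested composite -->
	<copy todir="${main.site.composite.directory}/${child.repository}">
		<fileset dir="${source.repository}"/>
	</copy>
	<!-- 2. create or update the nested major.minor composite repository -->
	<add.composite.repository.internal
		composite.repository.location="${main.site.composite.directory}"
		composite.repository.name="${site.label} ${majorMinorVersion}"
		composite.repository.child="${child.repository}"/>
	<!-- 3. create or update the main composite repository (all versions) -->
	<add.composite.repository.internal
		composite.repository.location="${software.download.area}/${updates.dir}"
		composite.repository.name="${site.label} All Versions"
		composite.repository.child="${majorMinorVersion}"/>
</target>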

First of all, this target will copy the p2 repository created during the build into the correct place inside the nested p2 composite repository.

Then, it will create or update the composite site for the nested repository major.minor, and then it will create or update the composite site for the main site (the one storing all the versions). The good thing about these ant tasks is that if you add a child location that already exists they won’t complain (though you can set a property to make them fail in such situations); this is crucial for updating the main repository, since most of the time you will not release a new major.minor.

This target calls (i.e., depends on) another target to compute the properties to pass to the macrodef, according to the information passed from the pom.xml

Default properties (that can be modified by passing a value from the pom.xml file):

  • software.download.area: the absolute path of the parent folder for the composite p2 site (default is “p2.repositories” in your home directory)
  • updates.dir: the relative path of the composite p2 site (default is “updates”); this is relative to software.download.area

Thus, by default, the main p2 composite update site will end up in ${user.home}/p2.repositories/updates. As hinted in the beginning, this can be any absolute local file system path; in EMF-Parsley Eclipse, since we release from Hudson, it will be the path served by the Eclipse web server download.eclipse.org. So we specify the two above properties accordingly.

These are the properties that must be passed from the pom.xml file

  • site.label: the main label that will appear in the composite site (and that will be recorded in the “Eclipse available sites”). The final label will be “${site.label} All Versions” for the main site and “${site.label} <major.minor>” for the nested composite sites.
  • project.build.directory: the location of the p2 repository created during the build (usually of the shape <project.id>/target/repository)
  • unqualifiedVersion: the version without qualifier (e.g., 1.1.0)
  • buildQualifier: the replaced qualifier in the built version

Note that except for the first property, the other ones have exactly the same name as the ones in Tycho (and are set by Tycho directly during the build, so we’ll reuse them).

The ant file will use an additional target (not shown here, but you’ll find it in the sources of the example) to extract the major.minor part of the passed version.

Calling the ant task from pom.xml

Now, we only need to execute the above ant task from the pom.xml file of the eclipse-repository project.

ATTENTION: in the following snippet, for the sake of readability, I split the <appArgLine> into several lines, but in your pom.xml it must be on exactly one (long) line.
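A sketch of the configuration (the repository URL and the set of dependencies may need to be adapted; the ant file name packaging-p2composite.ant is illustrative):

<plugin>
	<groupId>org.eclipse.tycho.extras</groupId>
	<artifactId>tycho-eclipserun-plugin</artifactId>
	<version>${tycho-version}</version>
	<configuration>
		<repositories>
			<repository>
				<id>luna</id>
				<layout>p2</layout>
				<url>http://download.eclipse.org/releases/luna</url>
			</repository>
		</repositories>
		<appArgLine>-application org.eclipse.ant.core.antRunner
			-buildfile packaging-p2composite.ant p2.composite.add
			-Dsite.label="Composite Site Example"
			-Dproject.build.directory=${project.build.directory}
			-DunqualifiedVersion=${unqualifiedVersion}
			-DbuildQualifier=${buildQualifier}</appArgLine>
		<dependencies>
			<!-- bundles needed to run the antRunner application and the p2 ant tasks -->
			<dependency>
				<artifactId>org.eclipse.ant.core</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
			<dependency>
				<artifactId>org.apache.ant</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
			<dependency>
				<artifactId>org.eclipse.equinox.p2.repository.tools</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
			<dependency>
				<artifactId>org.eclipse.equinox.p2.core.feature</artifactId>
				<type>eclipse-feature</type>
			</dependency>
		</dependencies>
	</configuration>
	<executions>
		<execution>
			<id>add-p2-composite-repository</id>
			<phase>package</phase>
			<goals>
				<goal>eclipse-run</goal>
			</goals>
		</execution>
	</executions>
</plugin>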

As I said, you should pass site.label as you see fit (for the other properties you can use the default).

You may want to put this plugin specification inside a Maven profile, that you activate only when you are actually doing a release (see, e.g., what we do in this pom.xml, taken from our EMF-Parsley Eclipse project).

Try the example

Let’s simulate some releases:

To see what you get, just clone the repository found here https://github.com/LorenzoBettini/p2composite-example, cd to p2composite.example.tycho and run
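i.e., the standard Tycho build command:

mvn clean verify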

After Maven has finished downloading all the dependencies, the build should terminate successfully.

And here’s the directory layout of your ${user.home}/p2.repositories

p2composite2

Run the command again, and you’ll get another child in the nested composite repository 1.0 (the qualifier has been replaced automatically with the new timestamp):

p2composite3

Let’s increase the service number, i.e., 1.0.1 (using the tycho-versions-plugin), and rebuild:
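For instance, something like:

mvn org.eclipse.tycho:tycho-versions-plugin:set-version -DnewVersion=1.0.1-SNAPSHOT
mvn clean verify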

and the new child will still be in the 1.0 folder:

p2composite4

Let’s increase the minor number, i.e., 1.1.0, and rebuild

and you’ll get another major.minor child repository

p2composite5

Let’s increase the major number, i.e., 2.0.0

and you’ll get another major.minor child repository:

p2composite6

and so on :)

With Buckminster

As I hinted before, with Buckminster you can directly call the p2 ant tasks, since they are included in the Buckminster headless product. You will only need to add custom actions in the .cspec (or in the .cspex if you’re inside a plugin or feature project) that call the ant task passing the right properties. An example can be found here. This refers to a slightly different ant file from the one shown in this blog post, but the idea is still the same.

Possible Improvements

You may want to add another nesting level, e.g., major -> major.minor etc… This should be straightforward: you just need to call the macrodef another time, and compute the main update site directory differently.

Hope this helps.

 

 


Analyzing Xtend code with Sonarqube

I recently started to play with Sonarqube to reduce “technical debt” and hopefully improve code quality (see my previous post). I’d like to report on my experience using Sonarqube to analyze Xtend code.

Xtend compiles into Java source code, so it looks like it is trivial to analyze it with Sonarqube; of course, Sonarqube will analyze the generated Java code, but it’s rather easy to refer to the original Xtend code, since Xtend generates clean Java code :)

However, with Sonarqube 4.4 it turned out to be harder than I thought.

My starting point was another issue: test results did not show up in the Sonarqube 4.4 web interface, and that was because test detection has changed in version 4 (http://sonarqube.15.x6.nabble.com/quot-Unit-test-success-quot-in-Sonarqube-4-4-td5028019.html).

I created an example to reproduce the problem and propose a solution: https://github.com/LorenzoBettini/tycho-xtend-sonar.

In the parent project we specify the actual project with sources to be analyzed, and the project containing tests (in this example I also use jacoco for code coverage, but that’s not crucial for this example):

And we enable all the relevant Maven plug-ins (e.g., Findbugs and Jacoco).

The plugin and the plugin.tests projects intentionally contain Xtend and Java files with some Findbugs issues.

Now, assuming you have Sonarqube 4.4 running on your machine, you can run the typical maven commands to analyze your code (make sure you set the MaxPermSize in the MAVEN_OPTS otherwise the Xtend compiler will run out of memory):
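For instance, something like:

export MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=256m"
mvn clean install
mvn sonar:sonar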

If you go to the Sonarqube web interface you see

sonarqube xtend 1

So you see that Sonarqube correctly detected Findbugs issues in all the Java files, but for the Java code generated by Xtend, it only detected the issues in the plugin.tests project, not in the plugin project (as explained here http://sonarqube.15.x6.nabble.com/sonarqube-findbugs-and-generated-sources-td5028237.html, Sonarqube does “not take into consideration this suppress warnings annotation in test files”).

To deal with this problem, I created an ant file which basically removes all the @SuppressWarnings(“all”) annotations in all the generated Java files in the xtend-gen folder:
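A sketch of such an ant file (the fileset pattern assumes the standard xtend-gen layout):

<project name="remove-suppresswarnings" default="remove.suppress.warnings.annotations">
	<target name="remove.suppress.warnings.annotations">
		<!-- strip the annotation so that Sonarqube/Findbugs also analyze this generated code -->
		<replace dir="${basedir}" value="">
			<include name="**/xtend-gen/**/*.java"/>
			<replacetoken>@SuppressWarnings("all")</replacetoken>
		</replace>
	</target>
</project>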

and I created a Maven profile in the parent pom that, when activated, invokes the ant target in the process-sources phase (recall that this phase is executed after the generate-sources phase, when the Xtend files are compiled into Java code)
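A sketch of the profile (the profile id and the ant file name are illustrative):

<profile>
	<id>remove-suppresswarnings</id>
	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-antrun-plugin</artifactId>
				<executions>
					<execution>
						<id>remove-suppress-warnings</id>
						<phase>process-sources</phase>
						<goals><goal>run</goal></goals>
						<configuration>
							<target>
								<ant antfile="removeannotations.ant"/>
							</target>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</profile>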

Now, let’s invoke the two maven commands, but this time the first one activates the above profile:
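For instance (using the profile id from the sketch above):

mvn clean install -Premove-suppresswarnings
mvn sonar:sonar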

OK, let’s go to the “Issues Drilldown” in the Sonarqube web interface and this time the issues are detected also in the plugin project:

sonarqube xtend 2

You may want to select “Since previous analysis” in the combo box, to make sure that this analysis detected these new issues:

sonarqube xtend 3

Hope this helps! :)

The source code can be found here: https://github.com/LorenzoBettini/tycho-xtend-sonar.


Dealing with Technical Debt with Sonarqube: a case study with Xsemantics

I only recently started to play with Sonarqube to reduce “technical debt” and hopefully improve code quality. I’d like to report on my experience using Sonarqube to analyze Xsemantics, a DSL for writing rule systems (e.g., type systems) for Xtext languages.

I was already using the Jenkins Continuous Integration server, with Findbugs and Jacoco as part of the build, so I was already analyzing such software; but Sonarqube brings new analysis rules for Java programs, integrates the results from Findbugs and Jacoco, and aggregates all the code quality results in a web site.

In spite of the Jenkins builds, Sonarqube detected some issues when I started:

xsemantics sonarqube 1

First of all, I had to exclude the src-gen and emf-gen directories (the former is where Xtext generates all its artifacts, and the latter is where Xcore generates the EMF model files): since these are generated files, I did not want to make them part of the analysis. I’ve done such exclusion with a property in the main pom.xml (for readability I split it into lines):
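Something along these lines (sonar.exclusions is the standard Sonar property for this):

<properties>
	<sonar.exclusions>
		**/src-gen/**,
		**/emf-gen/**
	</sonar.exclusions>
</properties>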

Note that for the moment I’m also excluding tests from the analysis… it is considered best practice to analyse tests as well (and I have many of them), but I wanted to concentrate on the code first. I also excluded other Java files for which issues are reported, like the Xtext Guice modules, due to the wildcards in the method signatures… I have to live with them anyway :)

After that the number of issues reduced a little bit, but there were still some issues to fix; most of them were easy, basically due to Java conventions I hadn’t used (e.g., names of fields and methods, or even names of type parameters).

One of the major ones was due to the wrong implementation of the clone method (“super.clone() should be called when overriding Object.clone()”, https://github.com/LorenzoBettini/xsemantics/issues/34).

Another thing that I had never considered was dependency cycles among Java packages and files. Sonarqube reports them. Luckily there were only a few of them in Xsemantics, and the hardest part was to read the Dependency Structure Matrix, but in the end I managed to remove them (there must be nothing in the upper triangle to have no cycle):

xsemantics sonarqube 2

To solve the cycles I had to change something in the runtime API (http://xsemantics.sourceforge.net/snapshots-for-xsemantics-1-6-for-xtext-2-7/) but it was basically a matter of moving Java classes into different packages.

Then came the last major issue: Duplicated Code!!! All by itself this issue was estimated at 13 days of technical debt! And most of the duplicated code was in the model inferrer (a concept from Xbase). Moreover, such an inferrer is written in Xtend, a cleaner Java, and the Xtend compiler then generates Java code. Thus, Sonarqube analyses the generated Java code, and the detected duplicate code blocks are on the Java code. This means that it takes some time to understand the corresponding original Xtend code. That’s not impossible, since Xtend generates clean Java code, but it surely adds some work :)

Before starting to remove duplicated code (around 80 blocks in the generated Java code) the Xtend inferrer was around 1090 lines long (many parts are related to string templates for code generation) corresponding to around 2500 lines of generated Java code! After the refactoring the Xtend inferrer was around 1045 lines long, and the generated Java code reduced to around 2000 lines.

That also explains the reduction of lines of code and complexity:

xsemantics sonarqube 3

But now technical debt is 0 :)

xsemantics sonarqube 4

And it’s nice to look at this dashboard :)

xsemantics sonarqube 5

By the way, I also had to disable some issues I did not agree with (tabulation characters) and to avoid reported issues about method name conventions in a specific file (because methods that start with the underscore character _ have a specific meaning in Xtext/Xtend). Instead of disabling them in the Sonarqube web interface, I preferred to disable them using properties in the pom file so that it works across different Sonarqube installations (e.g., I also have a local Sonarqube instance on my machine to do some quick experiments). Such multi properties are not officially supported in the Sonar invocation (e.g., through the sonar runner or via Maven), but I found a workaround: http://stackoverflow.com/questions/21825469/configure-sonar-sonar-issue-ignore-multicriteria-through-maven (but, be careful, it is considered a hack as reported in the mailing list: http://sonarqube.15.x6.nabble.com/sonar-issue-ignore-multicriteria-td5021722.html):
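A sketch of such properties (the rule keys should correspond to the tabulation and method naming rules; the resource key for the second criterion is illustrative):

<properties>
	<sonar.issue.ignore.multicriteria>e1,e2</sonar.issue.ignore.multicriteria>
	<!-- ignore "tabulation characters should not be used" everywhere -->
	<sonar.issue.ignore.multicriteria.e1.ruleKey>squid:S00105</sonar.issue.ignore.multicriteria.e1.ruleKey>
	<sonar.issue.ignore.multicriteria.e1.resourceKey>**/*.java</sonar.issue.ignore.multicriteria.e1.resourceKey>
	<!-- ignore method name conventions in the file containing the _ methods -->
	<sonar.issue.ignore.multicriteria.e2.ruleKey>squid:S00100</sonar.issue.ignore.multicriteria.e2.ruleKey>
	<sonar.issue.ignore.multicriteria.e2.resourceKey>**/XsemanticsValidator.java</sonar.issue.ignore.multicriteria.e2.resourceKey>
</properties>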

That’s all! I strongly suggest to give Sonarqube a try! :)


Switching to Xcore in your Xtext language

This is a followup of my previous post, Switching from an inferred Ecore model to an imported one in your Xtext grammar. The rationale for switching to a manually maintained metamodel can be found in the previous post. In this post, instead of using an Ecore file, we will use Xcore:

Xcore is an extended concrete syntax for Ecore that, in combination with Xbase, transforms it into a fully fledged programming language with high quality tools reminiscent of the Java Development Tools. You can use it not only to specify the structure of your model, but also the behavior of your operations and derived features as well as the conversion logic of your data types. It eliminates the dividing line between modeling and programming, combining the advantages of each.

I took inspiration from Jan Köhnlein’s blog post; after switching to a manually maintained Ecore in Xsemantics, I felt the need to further switch to Xcore, since I had started to write many operation implementations in the metamodel, and while you can do that in Ecore, using Xcore is much easier :) Thus in my case I was starting from an existing language, not to mention the use of Xbase (not covered in Jan’s post). Things were not easy, but once the procedure works, it is easily reproducible, and I’ll detail this for a smaller example.

So first of all, let’s create an Xtext project, org.xtext.example.hellocustomxcore (you can find the sources of this example online at https://github.com/LorenzoBettini/Xtext2-experiments); the grammar of the DSL is not important: this is just an example. We will first start developing the DSL using the automatic Ecore model inference and later we will switch to Xcore.

(The language is basically the same as in the previous post.)

The grammar of this example is as follows:
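Something of this shape (a sketch; the exact grammar is in the example sources):

grammar org.xtext.example.helloxcore.HelloXcore with org.eclipse.xtext.xbase.Xbase

generate helloxcore "http://www.xtext.org/example/helloxcore/HelloXcore"

Model:
	hellos+=Hello*
	greetings+=Greeting*;

Hello:
	'Hello' name=ID '!';

Greeting:
	'Greeting' name=ID '!';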

and we run the MWE2 generator.

To have something working, we also write a Jvm model inferrer.

With this DSL we can write simple programs (nothing interesting, this is just an example).

Now, let’s say we want to check in the validator that there are no elements with the same name; since both “Hello” and “Greeting” have the feature name, we can introduce in the metamodel a common interface with the method getName(). OK, we could achieve this also by introducing a fake rule in the Xtext grammar, but let’s do that with Xcore.

Switching to Xcore

Of course, first of all, you need to install Xcore in your Eclipse.

Before we use the export wizard, we must make sure we can open the generated .genmodel with the “EMF Generator” editor (otherwise the export will fail). If you get an error opening such editor about resolving a proxy to JavaVMTypes.ecore, like in the following screenshot…

gemodel_problems

..then we must tweak the generated .genmodel and add a reference to the JavaVMTypes.genmodel: open HelloXcore.genmodel with the text editor, and add the reference in the usedGenPackages attribute of the genModel root element.
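i.e., something like this (the platform URI below, pointing to the Xtext common types model, is an assumption to adapt to your setup):

usedGenPackages="platform:/resource/org.eclipse.xtext.common.types/model/JavaVMTypes.genmodel#//types"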

Since we’re editing the .genmodel file, we also take the chance to modify the output folder for the model files to emf-gen (see also later in this section for adding emf-gen as a source folder).

And we remove the properties that relate to the edit and the editor plug-ins (since we don’t want to generate them anyway).

Now save the edited file, refresh the file in the workspace by selecting it and pressing F5 (yes, also this operation seems to be necessary), and this time you should be able to open it with the “EMF Generator” editor. We can go on exporting the Xcore file.

We want the files generated by Xcore to be put into the emf-gen source folder; so we add a new source folder to our project, say emf-gen, where all the EMF classes will be generated; we also make sure to include such folder in the build.properties file.

First, we create an .xcore file starting from the generated .genmodel file:

  • navigate to the HelloXcore.genmodel file (it is in the directory model/generated)
  • right click on it and select “Export Model…”
  • in the dialog select “Xcore”
    gemodel_export1
  • The next page should already present you with the right directory URI
    gemodel_export2
  • In the next page select the package corresponding to our DSL, org.xtext.example.helloxcore.helloxcore (and choose the file name for the exported .xcore file, in this case Helloxcore.xcore)
    gemodel_export3
  • Then press Finish
  • If you get an error about a missing EObjectDescription, remove the generated (empty) Helloxcore.xcore file, and just repeat the Export procedure from the start, and the second time it should hopefully work

gemodel_export4

The second time, the procedure should terminate successfully with the following result:

  • The xcore file, Helloxcore.xcore, has been generated in the same directory as the .genmodel file (and the xcore file is also opened in the Xcore editor)
  • A dependency on org.eclipse.emf.ecore.xcore.lib has been added to the MANIFEST.MF
  • The new source folder emf-gen is full of compilation errors

gemodel_export5

Remember that the model files will be automatically generated when you modify the .xcore file (one of the nice things of Xcore is indeed the automatic building).

Fixing the Compilation Errors

These compilation errors are expected since Java files for the model are both in the src-gen and in the emf-gen folder. So let’s remove the ones in the src-gen folders (we simply delete the corresponding packages):

gemodel_export6

After that, everything compiles fine!

Now, you can move the Helloxcore.xcore file in the “model” directory, and remove the “model/generated” directory.

Modifying the mwe2 workflow

In the Xtext grammar, HelloXcore.xtext, we replace the generate statement with an import:
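i.e., something of this shape (the nsURI is the one of the previously inferred package):

import "http://www.xtext.org/example/helloxcore/HelloXcore"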

The DirectoryCleaner fragment related to the “model” directory should be removed (otherwise it will remove our Helloxcore.xcore file as well); we don’t need it anymore after we manually removed the generated folder with the generated .ecore and .genmodel files.

Then, in the language part, you need to load (with loadedResource) the XcoreLang.xcore, the Xbase and Ecore .ecore and .genmodel files, and finally the xcore file you have just exported, Helloxcore.xcore.

We can comment out the ecore.EMFGeneratorFragment (since we manually maintain the metamodel from now on).

The MWE2 file is now modified accordingly (the modifications are the ones described above):
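Only the relevant part of the language section is sketched here (the platform URIs are assumptions; see the example sources for the complete workflow):

language = auto-inject {
	uri = grammarURI

	// load the Xcore language and the models our Xcore file depends on
	loadedResource = "platform:/resource/org.eclipse.emf.ecore.xcore.lib/model/XcoreLang.xcore"
	loadedResource = "platform:/resource/org.eclipse.xtext.xbase/model/Xbase.ecore"
	loadedResource = "platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel"
	// the manually maintained metamodel
	loadedResource = "platform:/resource/org.xtext.example.helloxcore/model/Helloxcore.xcore"

	// the metamodel is no longer inferred from the grammar:
	// fragment = ecore.EMFGeneratorFragment auto-inject {}

	// ... the other fragments stay as they were ...
}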

Before running the workflow, you also need to add org.eclipse.emf.ecore.xcore as a dependency in your MANIFEST.MF.

We can now run the mwe2 workflow, which should terminate successfully.

We must now modify the plugin.xml (note that there’s no plugin.xml_gen anymore), so that the org.eclipse.emf.ecore.generated_package extension point contains the reference to our Xcore file:
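i.e., something like (the nsURI and the package class are the example’s ones, to be adapted):

<extension point="org.eclipse.emf.ecore.generated_package">
	<package
		uri="http://www.xtext.org/example/helloxcore/HelloXcore"
		class="org.xtext.example.helloxcore.helloxcore.HelloxcorePackage"
		genModel="model/Helloxcore.xcore"/>
</extension>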

Fixing Junit test problems

As we saw in the previous post, Junit tests do not work anymore, failing with errors about the EPackage not being registered.

All we need to do is to modify the StandaloneSetup in the src folder (NOT the generated one, since it will be overwritten by subsequent MWE2 workflow runs) and override the register method so that it performs the registration of the EPackage (as it used to do before):
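A sketch of the override (class names and the nsURI are the example’s ones):

import org.eclipse.emf.ecore.EPackage;
import org.xtext.example.helloxcore.helloxcore.HelloxcorePackage;

import com.google.inject.Injector;

public class HelloXcoreStandaloneSetup extends HelloXcoreStandaloneSetupGenerated {

	@Override
	public void register(Injector injector) {
		// perform the EPackage registration that the generated setup no longer does
		if (!EPackage.Registry.INSTANCE.containsKey("http://www.xtext.org/example/helloxcore/HelloXcore")) {
			EPackage.Registry.INSTANCE.put("http://www.xtext.org/example/helloxcore/HelloXcore",
					HelloxcorePackage.eINSTANCE);
		}
		super.register(injector);
	}
}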

And now the Junit tests will run again.

Modifying the metamodel with Xcore

We can now customize our metamodel, using the Xcore editor.

For example, we add the interface Element, with the method getName() and we make both Hello and Greeting implement this interface (they both have getName() thus the implementation of the interface is automatic).
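In Xcore this boils down to a few lines (a sketch):

interface Element {
	op String getName()
}

class Hello extends Element {
	String name
}

class Greeting extends Element {
	String name
}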

Using the Xcore editor is easy, and you have content assist; as soon as you press save, the Java files will be automatically regenerated:

xcore_modify1

We also add a method getElements() to the Model class returning an Iterable<Element> (containing both the Hello and the Greeting objects). This time, with Xcore, it is really easy to do so (compare that with the procedure of the previous post, requiring the use of an EAnnotation in the Ecore file), since Xcore uses the Xbase expression syntax for defining the body of the operations (with full content assist, not to mention automatic import statement insertion). See also the generated Java code on the right:

xcore_modify2

And now we can implement the validator method checking duplicates, using the new getElements() method and the fact that now both Hello and Greeting implement Element:
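For instance, along these lines (a sketch in Xtend; class names and messages are illustrative):

import org.eclipse.xtext.validation.Check
import org.xtext.example.helloxcore.helloxcore.Model

class HelloXcoreValidator extends AbstractHelloXcoreValidator {

	@Check
	def checkDuplicateElements(Model model) {
		val seen = newHashSet
		// getElements() returns both the Hello and the Greeting objects as Elements
		for (element : model.elements) {
			if (!seen.add(element.name))
				error("Duplicate name '" + element.name + "'", element, null)
		}
	}
}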

That’s all! I hope you found this tutorial useful :)

 


Switching from an inferred Ecore model to an imported one in your Xtext grammar

When you use Xtext for developing your language, the Ecore model for the AST is automatically derived/inferred from the grammar. If your DSL is simple, this automatic meta-model inference is usually enough. However, there might be cases where you need more control on the meta-model, and in such cases you will want to switch from an inferred Ecore model to an imported one, which you will manually maintain. This is documented in the Xtext documentation, and in some blog posts. When I needed to switch to an imported Ecore model for Xsemantics, things were not that easy, so I thought I would document the steps to perform the switch in this tutorial, using a simple example. (I should have talked about this in my Xtext book, but at that time I ran out of pages, so there was no space left for this subject :)

So first of all, let’s create an Xtext project, org.xtext.example.hellocustomecore (you can find the sources of this example online at https://github.com/LorenzoBettini/Xtext2-experiments); the grammar of the DSL is not important: this is just an example. We will first start developing the DSL using the automatic Ecore model inference and later we will switch to an imported Ecore.

The grammar of this example is as follows (to make things more interesting, we will also use Xbase):

and we run the MWE2 generator.

To have something working, we also write a Jvm model inferrer.

With this DSL we can write simple programs (nothing interesting, this is just an example).

Now, let’s say we want to check in the validator that there are no elements with the same name; since both “Hello” and “Greeting” have the feature name, we can introduce in the Ecore model a common interface with the method getName(). OK, we could achieve this also by introducing a fake rule in the Xtext grammar, but let’s switch to an imported Ecore model so that we can manually modify that.

Switching to an imported Ecore model

First of all, we add a new source folder to our project, say emf-gen, where all the EMF classes will be generated; we also make sure to include such folder in the build.properties file:

Remember that, at the moment, the EMF classes are generated into the src-gen folder, together with other Xtext artifacts (e.g., the ANTLR parser):

imported-ecore-project-layout1

Xtext generates the inferred Ecore model file and the GenModel file into the folder model/generated

imported-ecore-project-layout2

This is the new behavior introduced in Xtext 2.4.3 by the fragment ecore.EMFGeneratorFragment that replaces the now deprecated ecore.EcoreGeneratorFragment; if you still have the deprecated fragment in your MWE2 files, then the Ecore and the GenModel are generated in the src-gen folder.

Let’s rename the “generated” folder into “custom” (if in the future for any reason we want to re-enable Xtext Ecore inference, our custom files will not be overwritten):

imported-ecore-project-layout3

NOTE: if you simply move the .ecore and .genmodel file into the directory model, you will not be able to open the .ecore file with the Ecore editor: this is due to the fact that this Ecore file refers to Xbase Ecore models with a relative path; in that case you need to manually adjust such references by opening the .ecore file with the text editor.

From now on, remember, we will manually manage the Ecore file.

Now we change the GenModel file, so that the EMF model classes are generated into emf-gen instead of src-gen:

imported-ecore-genmodel

We need to change the MWE2 file as follows:

  • Enable the org.eclipse.emf.mwe2.ecore.EcoreGenerator fragment that will generate the EMF classes using our custom Ecore file and GenModel file; indeed, you must refer to the custom GenModel file; before that we also run the DirectoryCleaner on the emf-gen folder (this way, each time the EMF classes are generated, the previous classes are wiped out); enable these two parts right after the StandaloneSetup section;
  • Comment or remove the DirectoryCleaner element for the model directory (otherwise the workflow will remove our custom Ecore and GenModel files);
  • In the language section we load our custom Ecore file,
  • and we disable ecore.EMFGeneratorFragment (we don’t need that anymore, since we don’t want the Ecore model inference)

The MWE2 file is now modified accordingly (the modifications are the ones described above):
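Only the relevant new parts are sketched here (paths are illustrative; see the example sources for the complete workflow):

// right after the StandaloneSetup section:
component = DirectoryCleaner {
	directory = "${runtimeProject}/emf-gen"
}

component = org.eclipse.emf.mwe2.ecore.EcoreGenerator {
	genModel = "platform:/resource/org.xtext.example.hellocustomecore/model/custom/HelloCustomEcore.genmodel"
	srcPath = "platform:/resource/org.xtext.example.hellocustomecore/emf-gen"
}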

We add the dependency org.eclipse.xtext.ecore in the MANIFEST.MF:

imported-ecore-manifest

In the Xtext grammar we replace the generate statement with an import statement:

Now we’re ready to run the MWE2 workflow, and you should get no error (if you followed all the above instructions); you can see that now the EMF model classes are generated into the emf-gen folder (the corresponding packages in the src-gen folders are now empty and you can remove them):

imported-ecore-project-layout4

We must now modify the plugin.xml (note that there’s no plugin.xml_gen anymore), so that the org.eclipse.emf.ecore.generated_package extension point contains the reference to the new GenModel file:

 

If you try the editor for the DSL it will still work; however, the Junit tests will fail with errors about the EPackage not being found.

That’s because the generated StandaloneSetup does not register the EPackage anymore, see the diff:

imported-ecore-diff

All we need to do is to modify the StandaloneSetup in the src folder (NOT the generated one, since it will be overwritten by subsequent MWE2 workflow runs) and override the register method so that it performs the registration of the EPackage:

And now the Junit tests will run again.

Modifying the Ecore model

We can now customize our Ecore model, using the Ecore editor and the Properties view.

For example, we add the interface Element, with the method getName() and we make both Hello and Greeting implement this interface (they both have getName() thus the implementation of the interface is automatic).

imported-ecore-custom-ecore1 imported-ecore-custom-ecore2 imported-ecore-custom-ecore3

We also add a method getElements() to the Model class returning an Iterable<Element> (containing both the Hello and the Greeting objects)

imported-ecore-custom-ecore4

and we implement that method using an EAnnotation, using the source “http://www.eclipse.org/emf/2002/GenModel” and providing a body

imported-ecore-custom-ecore5 imported-ecore-custom-ecore6

With the following implementation

Let’s run the MWE2 workflow so that it will regenerate the EMF classes.

And now we can implement the validator method checking duplicates, using the new getElements() method and the fact that now both Hello and Greeting implement Element:

That’s all! I hope you found this tutorial useful :)


Testing a plain SWT Application with SWTBot

Revision History
18 April 2014 Modified the SWTBot test so that it can be reused also in a test suite (see the comments to this post).

I happened to give a lecture at the University of Florence on Test Driven Development; besides the standard Junit tests, I also wanted to show the students some functional tests with SWTBot. However, I did not want to introduce Eclipse views or dialogs; I just wanted to test a plain SWT application with SWTBot.

In the beginning, it took me some time to understand how to do that (I had always used SWTBot in the context of an Eclipse application); thanks to Mickael Istria, who assisted me via Skype, it ended up being rather easy.

You can find this example here: https://github.com/LorenzoBettini/junit-swtbot-example.

The SWT application is a simple dialog that computes the factorial of the given input (nothing fancy, its code can be seen here).

swtbot-test-example1

If we now want to test this SWT application with SWTBot, we can write an abstract base class that we use for our tests (see also the online code)

And we use this base class in our tests.

There are a few things to note in the abstract base class:

  • You need to spawn the application in a new thread (the bot will run in a different thread)
  • You must start the application before creating the bot (otherwise the Display will be null)
  • after that you can simply use SWTBot API as you’re used to.

Note that the thread will create our window and then it will enter the event loop; this thread synchronizes with the @Before method (executed before each test), which creates the SWTBot (using the shell created by the thread). The @After method (executed after each test), will close our window, so that each test is independent from each other. The thread executes in an infinite loop, thus as soon as the shell is closed it will create a new one, etc.
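A minimal sketch of such a base class, assuming a hypothetical FactorialWindow factory for the application’s shell (the synchronization is done here with a blocking queue; see the online code for the actual implementation):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
import org.eclipse.swtbot.swt.finder.SWTBot;
import org.junit.After;
import org.junit.Before;

public abstract class AbstractSwtBotTest {

	private static Thread uiThread;
	private static final BlockingQueue<Shell> shells = new ArrayBlockingQueue<>(1);

	protected SWTBot bot;

	@Before
	public void createBot() throws InterruptedException {
		if (uiThread == null) {
			uiThread = new Thread(() -> {
				Display display = new Display();
				while (true) { // recreate the window as soon as a test closes it
					Shell shell = FactorialWindow.createShell(display); // hypothetical factory
					shell.open();
					shells.offer(shell); // signal the test thread that the shell is ready
					while (!shell.isDisposed()) {
						if (!display.readAndDispatch()) {
							display.sleep();
						}
					}
				}
			}, "UI-Thread");
			uiThread.setDaemon(true);
			uiThread.start();
		}
		shells.take(); // wait for the UI thread to create the shell
		bot = new SWTBot(); // create the bot only after the Display exists
	}

	@After
	public void closeShell() {
		// close the window so that each test is independent of the others
		bot.activeShell().close();
	}
}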

Of course, this must be executed as a “Junit test”, NOT as a “Plug-in Junit test”, nor as a “SWTBot Test”, since we do not want any Eclipse application while running the test:

swtbot-test-example2

In the sources of the example you can also find the files to run the tests headlessly with Buckminster or with Maven/Tycho: just enter the directory mathutils.build and run the Buckminster or the Maven build.

For Buckminster, you just need to save the launch configuration you used to run the test, and use the junit command:
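For instance, something along these lines (the launch file path and output options are assumptions):

junit -l "mathutils.tests/mathutils.tests.launch" --flatXML --output "${WORKSPACE}/test.results/output.xml"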

For Tycho, you must specify <packaging>eclipse-test-plugin</packaging>, without further configuration: the useUIHarness property defaults to false.

During the headless run, first the Junit tests for the implementation of the factorial will be executed (these are not interesting in the context of SWTBot) and then the SWTBot tests will be executed.

 

 


Using the Xtend compiler in Buckminster builds

Up to now, I was always putting the Xtend generated Java files in my git repositories (for my Xtext projects), since I still hadn’t succeeded in invoking the Xtend standalone compiler in a Buckminster build. Dennis Hübner published a post with some hints on how to achieve that, but that never worked for me (and apparently it did not work for other users).

After some experiments, it seems I finally managed to trigger Xtend compilation in Buckminster builds, and in this post I’ll show the steps to achieve that (I’m using an example you can find on Github).

The main problems I had to solve were:

  • how to pass the classpath to the Xtend compiler
  • how to deal with chicken-and-egg problems (dependencies among Java and Xtend classes).

IMPORTANT: the build process described here uses a new flag for Buckminster’s build command, which has been added only recently; thus, you must make sure you have an updated version of Buckminster headless (from the 4.3 repository).

The steps to perform can be applied to your projects as well; they are simple and easy to reproduce. In this blog post I’ll try to explain them in detail.

This blog post assumes that you are already familiar with setting up a Buckminster build.

The example

The example I’m using is an Xtext DSL (just the Greeting example using Xbase), with many .xtend files and with the standard structure:

  • org.xtext.example.hellobuck, the runtime plugin,
  • org.xtext.example.hellobuck.ui, the ui plugin, which uses Xtend classes defined in the runtime plugin,
  • org.xtext.example.hellobuck.tests, the tests plugin, which uses Xtend classes defined in the runtime and in the ui plugin,
  • org.xtext.example.hellobuck.sdk, the SDK feature for the DSL.

Furthermore, we have two additional projects created by the Xtext Buckminster Wizard:

  • org.xtext.example.hellobuck.buckminster, the releng project,
  • org.xtext.example.hellobuck.site, the feature project for creating the p2 repository,

I blogged about the Xtext Buckminster Wizard in the past, and indeed this example is a fork of the example presented in that blog post.

Creating a launch configuration for the Xtend compiler

The first step consists in creating a Java launch configuration in the runtime plugin project that invokes the Xtend standalone compiler. This was shown in Dennis’ original post, but you need to change a few things. Here’s the XtendCompiler.launch file to put in the org.xtext.example.hellobuck runtime plugin project (of course you can call the launch file whatever you want):
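A sketch of its contents; the project-specific parts (the highlighted lines mentioned below, to be adapted) are the occurrences of org.xtext.example.hellobuck, and the program arguments are assumptions to check against the example sources:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<launchConfiguration type="org.eclipse.jdt.launching.localJavaApplication">
	<stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value="org.eclipse.xtend.core.compiler.batch.Main"/>
	<stringAttribute key="org.eclipse.jdt.launching.PROGRAM_ARGUMENTS" value="-cp ${project_classpath:org.xtext.example.hellobuck} -d xtend-gen src"/>
	<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="org.xtext.example.hellobuck"/>
	<stringAttribute key="org.eclipse.jdt.launching.WORKING_DIRECTORY" value="${workspace_loc:org.xtext.example.hellobuck}"/>
</launchConfiguration>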

This launch configuration can be reused in other projects, provided the highlighted lines are changed accordingly, since they refer to the containing project.

An important part of this launch configuration is the PROGRAM_ARGUMENTS that are passed to the Xtend compiler, in particular the -classpath argument. This was the main problem I experienced in the past (and that I saw in all the other posts in the forum): the Xtend compiler needs to find the Java classes your Xtend files depend upon and thus you need to pass a valid -classpath argument. But we can simply reuse the classpath of the containing project :)

Add dependency for Xtend standalone compiler

This launch configuration calls the Java application org.eclipse.xtend.core.compiler.batch.Main, thus you must add a dependency on the corresponding bundle in your MANIFEST.MF. The bundle you need to depend on is org.eclipse.xtend.standalone (the dependency can be optional):

xtend_standalone_dependency

 

Test the launch in your workbench

You can test this launch configuration from Eclipse, with Run As => Java Application. In the Console view you should see the output of the Xtend compiler.

This will give you confidence that the launch configuration works correctly and that all dependencies for invoking the Xtend compiler are in place.

Add an XtendCompiler.launch in the other projects

You must now add an XtendCompiler.launch in all the other projects containing Xtend files. In our example we must add it to the ui and the tests projects.

You can copy the one you have already created but MAKE SURE you update the relevant 3 parts according to the containing projects! See the highlighted lines above.

NOTE: you do NOT need to add a dependency on org.eclipse.xtend.standalone in the MANIFEST.MF of the ui and tests projects: they depend on the runtime plugin project which already has that dependency.

You may want to run the XtendCompiler.launch also in these projects from the Eclipse workbench, again to get confidence that you configured the launch configurations correctly.

IMPORTANT: when the Xtend compiler compiles the files in the ui and tests projects, you will see some ERROR lines.

From what I understand, these errors do not prevent the Xtend compiler from successfully generating Java files (see the final INFO line) and the procedure terminates successfully. Thus, you can ignore these errors. If the Xtend compiler really cannot produce Java files, it will terminate with a final error.

Configure the headless build

Now it’s time to configure the Buckminster headless build so that it runs the Xtend compiler. We created .launch files because one of the cool things of Buckminster is that it can seamlessly run the launch files.

The tricky part here is that, since we perform a clean build, there is a chicken-and-egg scenario:

  • no Java files have been compiled,
  • most Java files import Java files created by Xtend
  • the Xtend files import Java classes

To solve these problems we perform an initial clean build; this will run the Java compiler and such compilation will terminate with errors. We expect that, due to the chicken-and-egg situation. However, this will create enough .class files to run the Xtend compiler! It is important to run the build command with the (new) flag --continueonerror, otherwise the whole build will fail.

After running XtendCompiler.launch in the org.xtext.example.hellobuck runtime project, we run another build --continueonerror so that the Java files generated by the Xtend compiler will be compiled by the Java compiler. We then proceed similarly for the ui and the tests projects:
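The corresponding fragment of the commands file might look like this (the launch command syntax is an assumption; see the complete file linked below):

build --continueonerror
launch -l "org.xtext.example.hellobuck/XtendCompiler.launch"
build --continueonerror
launch -l "org.xtext.example.hellobuck.ui/XtendCompiler.launch"
build --continueonerror
launch -l "org.xtext.example.hellobuck.tests/XtendCompiler.launch"
build --continueonerror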

Then, your build can proceed as usual (at this point I prefer to perform a clean build): run the tests (both plain Junit and Plug-in Junit tests) and create the p2 repository:

The complete commands file can be seen here.

Executing the headless build

You can now run your headless build on your machine or on Jenkins.

My favorite way of doing that is by using an ANT script. The whole ANT script can be found in the example.

This script also automatically installs Buckminster headless if not present.

Before executing the commands file, it also removes the contents of the xtend-gen folder in all the projects; this way you are sure that no stale generated Java files are there.

Of course, you should now remove the xtend-gen folder from your git repository (and put it in the .gitignore file).

In Jenkins you can configure an Invoke Ant build step as shown in the screenshot (“Start Xvfb before the build, and shut it down after” is required to execute Plug-in Junit tests; we also pass an option to install Buckminster headless in the job’s workspace).

xtext-xtend-buckminster-jenkins

Try the example

You can just clone the git repository and run the ANT script.

As noted above, this will also install Buckminster headless if not found in the location specified by the property buckminster.home. This script will take some time, especially the first time, since it will materialize the target platform.

Hope you find this blog post useful! :)


The book on Xtext is out

My book on Xtext, “Implementing Domain-Specific Languages with Xtext and Xtend”, is now available on the Packt website! Get it while it’s hot! :)

You can find the outline and an example chapter at

http://www.packtpub.com/implementing-domain-specific-languages-with-xtext-and-xtend/book

Many thanks to the reviewers of the book: Jan Koehnlein, Henrik Lindberg, Pedro J. Molina, and Sebastian Zarnekow!

The sources of the examples presented in the book are available at https://github.com/LorenzoBettini/packtpub-xtext-book-examples

0304OS_mockupcover_normal

I would also like to thank all the people from Packt I dealt with.

 
