
Publish an Eclipse p2 repository on Sourceforge with rsync

This can be seen as a follow-up to my previous post on building Eclipse p2 composite repositories. In this blog post I’ll show an automatic way to publish an Eclipse p2 (composite) repository (a.k.a. update site) on Sourceforge, using rsync for synchronization. You can find many posts online about publishing update sites on Github Pages and, more recently, on Bintray. (As a reminder: rsync is a one-way synchronization tool, and we assume that the master replica is the one on Sourceforge; being a synchronization tool, rsync will only transfer the files that changed during synchronization.)

I prefer sourceforge for some reasons:

  • you have full access to the file upload system, either with a shell or, most importantly for the technique I’ll describe here, with rsync (from what I understand, Bintray instead manages the binary artifacts for you);
  • in order to create and update a p2 composite site you must have access to the current file system layout of the p2 update site, which, as far as I understand, is not possible with Bintray;
  • you get download statistics, and your artifacts will be automatically mirrored on Sourceforge’s mirrors.

By the way: you can store your git repository anywhere you want and publish the binaries on Sourceforge (see this page and this other page).

I’ll reuse the same example as in the previous post, the repository found at https://github.com/LorenzoBettini/p2composite-example, where you can find all the mechanisms for creating and updating a p2 composite repository.

The technique I’ll describe can be summarized as follows: when it comes to releasing a new child in the p2 composite update site (possibly already published on Sourceforge), the following steps are performed during the Maven/Tycho build:

  1. Use rsync to get an updated local version of the published p2 composite repository somewhere in your file system (this includes the case when you have never released a version, so you’ll simply get an empty local directory)
  2. Build the p2 repository with Tycho
  3. Add the above created p2 repository as a new child in the local p2 composite repository (this includes the case where you create a new composite repository, since that’s your first release)
  4. Use rsync to commit the changes back to the remote p2 composite repository

Since we use rsync, we have several benefits:

  • we’re allowed to manually modify (i.e., from outside the build infrastructure) the p2 composite repository, for instance by removing a child repository containing a wrong release, and commit the changes back;
  • we can release from any machine, notably from Jenkins or Hudson, since we always make sure to have a synchronized local version of the released p2 composite repository.

Prepare the directory on Sourceforge

This assumes that you have an account on Sourceforge and that you have registered a project. You then need to create, in the “Files” section, the directory that will host your p2 composite repository.

For this example I created a new project, eclipseexamples (https://sourceforge.net/projects/eclipseexamples/), and I plan to store the p2 composite repository in the Sourceforge file system under the path p2composite.example/updates.

So I’ll create the directory structure accordingly (using the “Add Folder” button):

sourceforge create folder structure 1 sourceforge create folder structure 2 sourceforge create folder structure 3

Ant script for rsync

I’m using an Ant script since it’s easy to call from Maven, and also manually from the command line. This assumes that you already have rsync installed on your machine (or on the CI server from which you plan to perform releases).

This ant file is meant to be completely reusable.

Here’s the Ant file (sketched below).
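A minimal sketch, assuming the structure described in the following paragraphs (a macro plus update and commit targets); the exact rsync options are my choice, not necessarily the ones used in the original script:

<?xml version="1.0" encoding="UTF-8"?>
<project name="rsync-p2-site" default="update">

	<!-- these properties are meant to be passed from Maven or the command line -->
	<property name="rsync.remote.dir" value=""/>
	<property name="rsync.local.dir" value=""/>
	<!-- set this to -n to only simulate the synchronization -->
	<property name="dryrun" value=""/>

	<!-- invoke rsync over ssh; --delete removes files at dest that no longer exist at source -->
	<macrodef name="rsync">
		<attribute name="source"/>
		<attribute name="dest"/>
		<sequential>
			<exec executable="rsync" failonerror="true">
				<arg line="${dryrun} -avzh --delete -e ssh @{source} @{dest}"/>
			</exec>
		</sequential>
	</macrodef>

	<!-- get an updated local copy of the remote composite repository -->
	<target name="update">
		<mkdir dir="${rsync.local.dir}"/>
		<rsync source="${rsync.remote.dir}" dest="${rsync.local.dir}"/>
	</target>

	<!-- commit the local changes back to the remote composite repository -->
	<target name="commit">
		<rsync source="${rsync.local.dir}" dest="${rsync.remote.dir}"/>
	</target>

</project>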

We have a macro for invoking rsync with the desired options (have a look at the rsync documentation for their meaning, but it should be straightforward to get an idea).

In particular, the transfer will be done over ssh, so you must have an ssh key pair, and you must have added the public key to your account on Sourceforge. Either you created the key pair without a passphrase (e.g., for releasing from a CI server of your own), or you must make sure you have already unlocked the key pair on your local machine (e.g., with an ssh-agent or with a keyring, depending on your OS).

The arguments source and dest depend on whether we’re doing an update or a commit (see the two Ant targets). If you define the property dryrun as -n, then you can simulate the synchronization (both for update and commit); this is important in the beginning, to make sure that you synchronize what you really mean to synchronize. Recall that when you perform an update, specifying the wrong local directory might lead to the complete deletion of that directory (the same holds for a commit and the remote directory). Moreover, source and destination URLs in rsync have different semantics depending on whether they end with a slash, so make sure you understand them if you need to customize this Ant file or pass special URLs.

The properties rsync.remote.dir and rsync.local.dir will be passed from the Tycho build (or from the command line if you call the ant script directly). Once again, please use the dryrun property until you’re sure that you’re synchronizing the right paths (both local and remote).

Releasing during the Tycho build

Now we just need to call this Ant file’s targets appropriately from the Tycho build; I’ll do that in the pom.xml of the project that builds and updates the composite p2 repository.

Since I don’t want to push a new release to the remote site on each build, I’ll configure the plugins inside a profile (it’s up to you to decide when to release); the new part is the maven-antrun-plugin configuration sketched further below.

Now, the URL to access a remote path on Sourceforge with ssh has the following shape:

<username>,<project>@frs.sourceforge.net:/home/frs/project/<project>/<path>

So in my case I specified (again, the final / is crucial for what we want to synchronize with rsync, see the note above):

lbettini,eclipseexamples@frs.sourceforge.net:/home/frs/project/eclipseexamples/p2composite.example/updates/

The local URL specifies where the local p2 composite site is stored (see the previous post), in this example it defaults to

${user.home}/p2.repositories/updates/

Again, the final / is crucial.

We configured the maven-antrun-plugin with two executions (sketched after the list):

  1. before updating the p2 composite update site (phase prepare-package) we make sure we have a synchronized local version of the repository
  2. after updating the p2 composite update site (phase verify) we commit the changes to the remote repository
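Here’s a minimal sketch of that profile (the Ant file name rsync.ant is an assumption, and the targets refer to the Ant sketch shown earlier; adjust paths and URLs to your setup):

<profile>
	<id>release-composite</id>
	<properties>
		<rsync.remote.dir>lbettini,eclipseexamples@frs.sourceforge.net:/home/frs/project/eclipseexamples/p2composite.example/updates/</rsync.remote.dir>
		<rsync.local.dir>${user.home}/p2.repositories/updates/</rsync.local.dir>
	</properties>
	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-antrun-plugin</artifactId>
				<executions>
					<!-- before updating the composite site: synchronize the local copy -->
					<execution>
						<id>update-local-copy</id>
						<phase>prepare-package</phase>
						<goals>
							<goal>run</goal>
						</goals>
						<configuration>
							<target>
								<ant antfile="rsync.ant" target="update">
									<property name="rsync.remote.dir" value="${rsync.remote.dir}"/>
									<property name="rsync.local.dir" value="${rsync.local.dir}"/>
								</ant>
							</target>
						</configuration>
					</execution>
					<!-- after updating the composite site: commit the changes back -->
					<execution>
						<id>commit-to-remote</id>
						<phase>verify</phase>
						<goals>
							<goal>run</goal>
						</goals>
						<configuration>
							<target>
								<ant antfile="rsync.ant" target="commit">
									<property name="rsync.remote.dir" value="${rsync.remote.dir}"/>
									<property name="rsync.local.dir" value="${rsync.local.dir}"/>
								</ant>
							</target>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</profile>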
That’s all :)

Let’s try it

Of course, if you want to try it, you need a project on sourceforge and a directory on that project’s Files section (and you’ll have to change the URLs accordingly in the pom file).

To perform a release we need to call the build enabling the profile release-composite, and specify at least verify as goal, for instance: mvn clean verify -Prelease-composite

Let’s say we still haven’t released anything.

Since the remote directory is empty, in our local file system we’ll simply have the directory created. At the end of the build, the composite site is created and the remote directory is synchronized with our local contents:

Let’s have a look at the remote directory: it will contain the created p2 composite site

sourceforge uploaded artifacts 1

sourceforge uploaded artifacts 2

Let’s perform another release; our local copy is up-to-date, so we won’t receive anything during the update phase, but then we’ll commit another release.

Let’s have a look at sourceforge and see the new release

sourceforge uploaded artifacts 3

Let’s remove our local copy and try to perform another release; this time the update phase will make sure our local composite repository is synchronized with the remote site (we’ll get the whole composite site we had already released), so that when we add another composite child we’ll update our local composite repository; then we’ll commit the changes to the server (again, by uploading only the modified files, i.e., compositeArtifacts.xml, compositeContent.xml, and the new directory with the new child repository):

Again, the remote site is correctly updated

sourceforge uploaded artifacts 4

Providing the URL of your p2 repository

Now that you have your p2 repository on sourceforge, you only need to give your users the URL to use for installing your features in Eclipse.

You have two forms for the URL

  • This will use the mirror infrastructure of sourceforge: http://sourceforge.net/projects/<project>/files/<path>
  • This will bypass mirrors: http://master.dl.sourceforge.net/project/<project>/<path>

If you use the mirror form, when installing in Eclipse (or provisioning a target platform) you’ll see warnings on the console of the shape

But it’s safe to ignore them.

For our example the URL can be one of the following:

  • With mirrors: http://sourceforge.net/projects/eclipseexamples/files/p2composite.example/updates/
  • Main site: http://master.dl.sourceforge.net/project/eclipseexamples/p2composite.example/updates/

You may want to try them both in Eclipse.

Please keep in mind that you may hit some unavailability errors now and then, if sourceforge sites are down for maintenance or unreachable for any reason… but that’s not much different when you hit a bad Eclipse mirror, or the main Eclipse download site is down… I guess no hosting site is perfect anyway ;)

I hope you find this blog post useful, Happy releasing! :)

 


Creating p2 composite repositories during the build

I like to build p2 composite repositories for all my Eclipse projects, to keep all the versions available for consumption.

Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

The nice thing about composite repositories is that they can be nested at any level. Thus, I like to have nested composite repositories according to major.minor and major.minor.service.qualifier.

Thus the layout of the p2 composite repository should be similar to the following screenshot

p2composite1

Note that the directories containing a standard p2 repository have the same name as the contained feature.

The key points of a p2 composite repository are the two files compositeArtifacts.xml and compositeContent.xml. Their structure is simple, e.g.,
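For instance, here’s a sketch of a compositeContent.xml for a layout like the one above (compositeArtifacts.xml is analogous, with the compositeArtifactRepository processing instruction and the CompositeArtifactRepository type; the name, timestamp, and child locations are just examples):

<?xml version='1.0' encoding='UTF-8'?>
<?compositeMetadataRepository version='1.0.0'?>
<repository name='Composite Site Example All Versions'
		type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository'
		version='1.0.0'>
	<properties size='1'>
		<!-- timestamp of the last modification -->
		<property name='p2.timestamp' value='1396445449611'/>
	</properties>
	<children size='2'>
		<!-- these locations are relative to the path of this file -->
		<child location='1.0'/>
		<child location='1.1'/>
	</children>
</repository>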

Note that a child location is relative to the path of these files; you can also specify absolute paths, not to mention http URLs to other remote p2 sites.

The structure is not that complex, so you can also create it by hand; but keeping it up to date might not be that trivial. In that respect, p2 provides some Ant tasks for managing composite repositories (creating, adding an entry, removing an entry), and that’s my favorite way to deal with composite repositories. I’ll detail what I usually do in this blog post; in particular, how to create (or update) a p2 composite repository with a new entry during the build.

The ant file is completely reusable and customizable by passing properties; you can reuse it as it is, after you setup your pom.xml as detailed below.

In this blog post I’ll show how to do that with Maven/Tycho, but the same procedure can be done in a Buckminster build (as I’ll hint at the end).

I’ll use a simple example, https://github.com/LorenzoBettini/p2composite-example, consisting of a plug-in project, a feature project, a project for the site, and a releng project (a Maven/Tycho parent project). The plug-in and feature project are not interesting in this context: the most interesting one is the site project (a Tycho eclipse-repository packaging type).

Of course, in order to run such ant tasks, you must run them using the org.eclipse.ant.core.antRunner application. Buckminster, as an Eclipse product, already contains that application. With Tycho, you can use the tycho-eclipserun-plugin, to run an Eclipse application from Maven.

We use this technique for releasing new versions of our EMF-Parsley Eclipse project. We do that directly from our Hudson HIPP instance; the idea is that the location of the final main composite site is the one served through HTTP from download.eclipse.org. We have a dedicated Hudson job that releases a new version and puts it in the composite repository.

The ant file

The internal details of this Ant file are not necessary for reusing it, so you can skip the first part of this section (you only need to know the main properties to pass). Of course, if you read it and have suggestions for improving it, I’d be very grateful :)

The ant file consists of some targets and macro definitions.

The main macro definition is the one invoking the p2 ant task:

Note that we’ll also create a p2.index file. I prefer not to compress the compositeArtifacts.xml and compositeContent.xml files, for easier inspection and manual modification, but you can compress them by setting the “compressed” attribute above to “true”.
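A minimal sketch of such a macro, built around the p2.composite.repository Ant task (the macro name and its attributes are my own; I’m assuming the task runs inside the org.eclipse.ant.core.antRunner application, where it is available):

<macrodef name="p2.composite.add">
	<attribute name="site.dir"/>
	<attribute name="site.label"/>
	<attribute name="child.repository"/>
	<sequential>
		<!-- create the composite repository, or update it by adding the new child -->
		<p2.composite.repository>
			<repository location="file:@{site.dir}" name="@{site.label}" compressed="false"/>
			<add>
				<repository location="@{child.repository}"/>
			</add>
		</p2.composite.repository>
		<!-- tell p2 to look directly for the composite* files -->
		<echo file="@{site.dir}/p2.index">version=1
metadata.repository.factory.order=compositeContent.xml,!
artifact.repository.factory.order=compositeArtifacts.xml,!
</echo>
	</sequential>
</macrodef>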

This macro will be called twice in the main task

First of all, this task will copy the p2 repository created during the build in the correct place inside the nested p2 composite repository.

Then, it will create or update the composite site for the nested repository major.minor, and then it will create or update the composite site for the main site (the one storing all the versions). The good thing about these ant tasks is that if you add a child location that already exists they won’t complain (though you can set a property to make them fail in such situations); this is crucial for updating the main repository, since most of the time you will not release a new major.minor.

This target calls (i.e., depends on) another target to compute the properties to pass to the macrodef, according to the information passed from the pom.xml

Default properties (that can be modified by passing a value from the pom.xml file):

  • software.download.area: the absolute path of the parent folder for the composite p2 site (default is “p2.repositories” in your home directory)
  • updates.dir: the relative path of the composite p2 site (default is “updates”); this is relative to software.download.area

Thus, by default, the main p2 composite update site will end up in ${user.home}/p2.repositories/updates. As hinted in the beginning, this can be any absolute local file system path; in EMF-Parsley Eclipse, since we release from Hudson, it will be the path served by the web server download.eclipse.org. So we specify the two above properties accordingly.

These are the properties that must be passed from the pom.xml file

  • site.label: the main label that will appear in the composite site (and that will be recorded in the “Eclipse available sites”). The final label will be “${site.label} All Versions” for the main site and “${site.label} <major.minor>” for the nested composite sites.
  • project.build.directory: the location of the p2 repository created during the build (usually of the shape <project.id>/target/repository)
  • unqualifiedVersion: the version without qualifier (e.g., 1.1.0)
  • buildQualifier: the replaced qualifier in the built version

Note that except for the first property, the other ones have exactly the same name as the ones in Tycho (and are set by Tycho directly during the build, so we’ll reuse them).

The ant file will use an additional target (not shown here, but you’ll find it in the sources of the example) to extract the major.minor part of the passed version.

Calling the ant task from pom.xml

Now, we only need to execute the above Ant task from the pom.xml file of the eclipse-repository project.

ATTENTION: in the following snippet, for the sake of readability, I split the <appArgLine> into several lines, but in your pom.xml it must be exactly one (long) line.
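A sketch of the plugin configuration (the Ant file name packaging-p2composite.ant, the target name, the repository URL, the ${tycho-version} property, and the exact dependency list are assumptions; adjust them to your setup):

<plugin>
	<groupId>org.eclipse.tycho.extras</groupId>
	<artifactId>tycho-eclipserun-plugin</artifactId>
	<version>${tycho-version}</version>
	<configuration>
		<!-- remember: this must be written on ONE line in the real pom.xml -->
		<appArgLine>-application org.eclipse.ant.core.antRunner
			-buildfile packaging-p2composite.ant p2.composite.add
			-Dsite.label="Composite Site Example"
			-Dproject.build.directory=${project.build.directory}
			-DunqualifiedVersion=${unqualifiedVersion}
			-DbuildQualifier=${buildQualifier}</appArgLine>
		<repositories>
			<repository>
				<id>juno</id>
				<layout>p2</layout>
				<url>http://download.eclipse.org/releases/juno</url>
			</repository>
		</repositories>
		<dependencies>
			<!-- bundles needed to run antRunner with the p2 Ant tasks -->
			<dependency>
				<artifactId>org.eclipse.ant.core</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
			<dependency>
				<artifactId>org.apache.ant</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
			<dependency>
				<artifactId>org.eclipse.equinox.p2.repository.tools</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
		</dependencies>
	</configuration>
	<executions>
		<execution>
			<id>add-p2-composite-repository</id>
			<phase>package</phase>
			<goals>
				<goal>eclipse-run</goal>
			</goals>
		</execution>
	</executions>
</plugin>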

As I said, you should pass site.label as you see fit (for the other properties you can use the default).

You may want to put this plugin specification inside a Maven profile, that you activate only when you are actually doing a release (see, e.g., what we do in this pom.xml, taken from our EMF-Parsley Eclipse project).

Try the example

Let’s simulate some releases:

To see what you get, just clone the repository found at https://github.com/LorenzoBettini/p2composite-example, cd to p2composite.example.tycho, and run the build (e.g., mvn clean verify).

After Maven finished downloading all the dependencies you should see something like

And here’s the directory layout of your ${user.home}/p2.repositories

p2composite2

Run the command again, and you’ll get another child in the nested composite repository 1.0 (the qualifier has been replaced automatically with the new timestamp):

p2composite3

Let’s increase the service number, i.e., 1.0.1 (using the tycho-versions-plugin), and rebuild:

and the new child will still be in 1.0 folder:

p2composite4

Let’s increase the minor number, i.e., 1.1.0, and rebuild

and you’ll get another major.minor child repository

p2composite5

Let’s increase the major number, i.e., 2.0.0

and you’ll get another major.minor child repository

p2composite6

and so on :)

With Buckminster

As I hinted before, with Buckminster you can directly call the p2 ant tasks, since they are included in the Buckminster headless product. You will only need to add custom actions in the .cspec (or in the .cspex if you’re inside a plugin or feature project) that call the ant task passing the right properties. An example can be found here. This refers to a slightly different ant file from the one shown in this blog post, but the idea is still the same.

Possible Improvements

You may want to add another nesting level, e.g., major -> major.minor etc… This should be straightforward: you just need to call the macrodef another time, and compute the main update site directory differently.

Hope this helps.

 

 


Switching to Xcore in your Xtext language

This is a follow-up to my previous post, Switching from an inferred Ecore model to an imported one in your Xtext grammar. The rationale for switching to a manually maintained metamodel can be found in the previous post. In this post, instead of using an Ecore file, we will use Xcore:

Xcore is an extended concrete syntax for Ecore that, in combination with Xbase, transforms it into a fully fledged programming language with high quality tools reminiscent of the Java Development Tools. You can use it not only to specify the structure of your model, but also the behavior of your operations and derived features as well as the conversion logic of your data types. It eliminates the dividing line between modeling and programming, combining the advantages of each.

I took inspiration from Jan Köhnlein’s blog post; after switching to a manually maintained Ecore in Xsemantics, I felt the need to switch further to Xcore, since I had started writing many operation implementations in the metamodel, and while you can do that in Ecore, it is much easier with Xcore :) Thus, in my case I was starting from an existing language, not to mention the use of Xbase (not covered in Jan’s post). Things were not easy, but once the procedure works, it is easily reproducible, and I’ll detail it for a smaller example.

So first of all, let’s create an Xtext project, org.xtext.example.hellocustomxcore, (you can find the sources of this example online at https://github.com/LorenzoBettini/Xtext2-experiments); the grammar of the DSL is not important: this is just an example. We will first start developing the DSL using the automatic Ecore model inference and later we will switch to Xcore.

(the language is basically the same as in the previous post).

The grammar of this example is as follows:

and we run the MWE2 generator.

To have something working, we also write an inferrer

With this DSL we can write programs of the shape (nothing interesting, this is just an example)

Now, let’s say we want to check in the validator that there are no elements with the same name; since both “Hello” and “Greeting” have the feature name, we can introduce in the metamodel a common interface with the method getName(). OK, we could achieve this also by introducing a fake rule in the Xtext grammar, but let’s do that with Xcore.

Switching to Xcore

Of course, first of all, you need to install Xcore in your Eclipse.

Before we use the export wizard, we must make sure we can open the generated .genmodel with the “EMF Generator” editor (otherwise the export will fail). If you get an error about resolving a proxy to JavaVMTypes.ecore when opening that editor, like in the following screenshot…

gemodel_problems

..then we must tweak the generated .genmodel and add a reference to JavaVMTypes.genmodel: open HelloXcore.genmodel with the text editor, and search for the part (only the relevant part of the line is shown)

and add the reference to the JavaVMTypes.genmodel:

Since we’re editing the .genmodel file, we also take the chance to modify the output folder for the model files to emf-gen (see also later in this section for adding emf-gen as a source folder):

And we remove the properties that relate to the edit and the editor plug-ins (since we don’t want to generate them anyway):

Now save the edited file, refresh the file in the workspace by selecting it and pressing F5 (yes, also this operation seems to be necessary), and this time you should be able to open it with the “EMF Generator” editor. We can go on exporting the Xcore file.

We want the files generated by Xcore to be put into the emf-gen source folder; so we add a new source folder to our project, say emf-gen, where all the EMF classes will be generated; we also make sure to include such folder in the build.properties file.

First, we create an .xcore file starting from the generated .genmodel file:

  • navigate to the HelloXcore.genmodel file (it is in the directory model/generated)
  • right click on it and select “Export Model…”
  • in the dialog select “Xcore”
    gemodel_export1
  • The next page should already present you with the right directory URI
    gemodel_export2
  • In the next page select the package corresponding to our DSL, org.xtext.example.helloxcore.helloxcore (and choose the file name for the exported .xcore file, e.g., Helloxcore.xcore)
    gemodel_export3
  • Then press Finish
  • If you get an error about a missing EObjectDescription, remove the generated (empty) Helloxcore.xcore file, and just repeat the Export procedure from the start, and the second time it should hopefully work

gemodel_export4

The second time, the procedure should terminate successfully with the following result:

  • The xcore file, Helloxcore.xcore, has been generated in the same directory as the .genmodel file (and it is also opened in the Xcore editor)
  • A dependency on org.eclipse.emf.ecore.xcore.lib has been added to the MANIFEST.MF
  • The new source folder emf-gen is full of compilation errors

gemodel_export5

Remember that the model files will be automatically generated when you modify the .xcore file (one of the nice things of Xcore is indeed the automatic building).

Fixing the Compilation Errors

These compilation errors are expected, since the Java files for the model are both in the src-gen and in the emf-gen folder. So let’s remove the ones in the src-gen folder (we simply delete the corresponding packages):

gemodel_export6

After that, everything compiles fine!

Now, you can move the Helloxcore.xcore file in the “model” directory, and remove the “model/generated” directory.

Modifying the mwe2 workflow

In the Xtext grammar, HelloXcore.xtext, we replace the generate statement with an import:

The DirectoryCleaner fragment related to the “model” directory should be removed (otherwise it will remove our Helloxcore.xcore file as well); we don’t need it anymore after we manually removed the folder with the generated .ecore and .genmodel files.

Then, in the language part, you need to loadResource the XcoreLang.xcore, the Xbase and Ecore .ecore and .genmodel files, and finally the Xcore file you have just exported, Helloxcore.xcore.

We can comment out the ecore.EMFGeneratorFragment (since we manually maintain the metamodel from now on).

The MWE2 file is now as follows (with the modifications highlighted):

Before running the workflow, you also need to add org.eclipse.emf.ecore.xcore as a dependency in your MANIFEST.MF.

We can now run the mwe2 workflow, which should terminate successfully.

We must now modify the plugin.xml (note that there’s no plugin.xml_gen anymore), so that the org.eclipse.emf.ecore.generated_package extension point contains the reference to our Xcore file:
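A sketch of that extension point registration (the package URI and class name are assumptions based on this example’s project name; check your generated code for the actual values):

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin>
	<extension point="org.eclipse.emf.ecore.generated_package">
		<!-- genModel points directly to the .xcore file -->
		<package
			uri="http://www.xtext.org/example/helloxcore/HelloXcore"
			class="org.xtext.example.helloxcore.helloxcore.HelloxcorePackage"
			genModel="model/Helloxcore.xcore"/>
	</extension>
</plugin>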

Fixing Junit test problems

As we saw in the previous post, Junit tests do not work anymore with errors of the shape

All we need to do is to modify the StandaloneSetup in the src folder (NOT the generated one, since it will be overwritten by subsequent MWE2 workflow runs) and override the register method so that it performs the registration of the EPackage (as it used to do before):

And now the Junit tests will run again.

Modifying the metamodel with Xcore

We can now customize our metamodel, using the Xcore editor.

For example, we add the interface Element, with the method getName() and we make both Hello and Greeting implement this interface (they both have getName() thus the implementation of the interface is automatic).

Using the Xcore editor is easy, and you have content assist; as soon as you press save, the Java files will be automatically regenerated:

xcore_modify1

We also add a method getElements() to the Model class returning an Iterable<Element> (containing both the Hello and the Greeting objects). This time, with Xcore, it is really easy to do so (compare that with the procedure of the previous post, requiring the use of an EAnnotation in the Ecore file), since Xcore uses the Xbase expression syntax for defining the body of operations (with full content assist, not to mention automatic import statement insertion). See also the generated Java code on the right:

xcore_modify2

And now we can implement the validator method checking duplicates, using the new getElements() method and the fact that now both Hello and Greeting implement Element:

That’s all! I hope you found this tutorial useful :)

 


Switching from an inferred Ecore model to an imported one in your Xtext grammar

When you use Xtext to develop your language, the Ecore model for the AST is automatically derived/inferred from the grammar. If your DSL is simple, this automatic metamodel inference is usually enough. However, there might be cases where you need more control over the metamodel, and in such cases you will want to switch from an inferred Ecore model to an imported one, which you will manually maintain. This is documented in the Xtext documentation and in some blog posts. When I needed to switch to an imported Ecore model for Xsemantics, things were not that easy, so I thought I’d document the steps to perform the switch in this tutorial, using a simple example. (I should have talked about this in my Xtext book, but at that time I ran out of pages, so there was no space left for this subject :)

So first of all, let’s create an Xtext project, org.xtext.example.hellocustomecore, (you can find the sources of this example online at https://github.com/LorenzoBettini/Xtext2-experiments); the grammar of the DSL is not important: this is just an example. We will first start developing the DSL using the automatic Ecore model inference and later we will switch to an imported Ecore.

The grammar of this example is as follows (to make things more interesting, we will also use Xbase):

and we run the MWE2 generator.

To have something working, we also write an inferrer

With this DSL we can write programs of the shape (nothing interesting, this is just an example)

Now, let’s say we want to check in the validator that there are no elements with the same name; since both “Hello” and “Greeting” have the feature name, we can introduce in the Ecore model a common interface with the method getName(). OK, we could achieve this also by introducing a fake rule in the Xtext grammar, but let’s switch to an imported Ecore model so that we can manually modify that.

Switching to an imported Ecore model

First of all, we add a new source folder to our project (you must create it with File -> New -> Source Folder; or, if you create it as a normal folder, you must then add it as a source folder with Project -> Properties -> Java Build Path: Source tab), say emf-gen, where all the EMF classes will be generated; we also make sure to include such a folder in the build.properties file:

Remember that, at the moment, the EMF classes are generated into the src-gen folder, together with other Xtext artifacts (e.g., the ANTLR parser):

imported-ecore-project-layout1

Xtext generates the inferred Ecore model file and the GenModel file into the folder model/generated

imported-ecore-project-layout2

This is the new behavior introduced in Xtext 2.4.3 by the fragment ecore.EMFGeneratorFragment, which replaces the now deprecated ecore.EcoreGeneratorFragment; if you still have the deprecated fragment in your MWE2 files, then the Ecore and GenModel files are generated in the src-gen folder.

Let’s rename the “generated” folder to “custom” (if in the future, for any reason, we want to re-enable Xtext’s Ecore inference, our custom files will not be overwritten):

imported-ecore-project-layout3

NOTE: if you simply move the .ecore and .genmodel files into the directory model, you will not be able to open the .ecore file with the Ecore editor: this is due to the fact that this Ecore file refers to the Xbase Ecore models with a relative path; in that case you need to manually adjust such references by opening the .ecore file with the text editor.

From now on, remember, we will manually manage the Ecore file.

Now we change the GenModel file, so that the EMF model classes are generated into emf-gen instead of src-gen:

imported-ecore-genmodel

We need to change the MWE2 file as follows:

  • Enable the org.eclipse.emf.mwe2.ecore.EcoreGenerator fragment that will generate the EMF classes using our custom Ecore file and GenModel file; indeed, you must refer to the custom GenModel file; before that we also run the DirectoryCleaner on the emf-gen folder (this way, each time the EMF classes are generated, the previous classes are wiped out); enable these two parts right after the StandaloneSetup section;
  • Comment or remove the DirectoryCleaner element for the model directory (otherwise the workflow will remove our custom Ecore and GenModel files);
  • In the language section we load our custom Ecore file,
  • and we disable ecore.EMFGeneratorFragment (we don’t need that anymore, since we don’t want the Ecore model inference)

The MWE2 file is now as follows (with the modifications highlighted):

We add the dependency org.eclipse.xtext.ecore in the MANIFEST.MF:

imported-ecore-manifest

In the Xtext grammar we replace the generate statement with an import statement:

Now we’re ready to run the MWE2 workflow, and you should get no errors (if you followed all the above instructions); you can see that the EMF model classes are now generated into the emf-gen folder (the corresponding packages in the src-gen folder are now empty and you can remove them):

imported-ecore-project-layout4

We must now modify the plugin.xml (note that there’s no plugin.xml_gen anymore), so that the org.eclipse.emf.ecore.generated_package extension point contains the reference to the new GenModel file:
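A sketch of the extension (the package URI and class name are assumptions based on this example’s project name; note that here genModel points to our custom GenModel file):

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin>
	<extension point="org.eclipse.emf.ecore.generated_package">
		<!-- genModel now refers to the manually maintained GenModel -->
		<package
			uri="http://www.xtext.org/example/hellocustomecore/HelloCustomEcore"
			class="org.xtext.example.hellocustomecore.hellocustomecore.HellocustomecorePackage"
			genModel="model/custom/HelloCustomEcore.genmodel"/>
	</extension>
</plugin>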

 

If you try the editor for the DSL it will still work; however, the Junit tests will fail with errors of this shape:

That’s because the generated StandaloneSetup does not register the EPackage anymore, see the diff:

imported-ecore-diff

All we need to do is to modify the StandaloneSetup in the src folder (NOT the generated one, since it will be overwritten by subsequent MWE2 workflow runs) and override the register method so that it performs the registration of the EPackage:

And now the Junit tests will run again.

Modifying the Ecore model

We can now customize our Ecore model, using the Ecore editor and the Properties view.

For example, we add the interface Element, with the method getName() and we make both Hello and Greeting implement this interface (they both have getName() thus the implementation of the interface is automatic).

imported-ecore-custom-ecore1 imported-ecore-custom-ecore2 imported-ecore-custom-ecore3

We also add a method getElements() to the Model class returning an Iterable<Element> (containing both the Hello and the Greeting objects)

imported-ecore-custom-ecore4

and we implement that method using an EAnnotation with the source “http://www.eclipse.org/emf/2002/GenModel”, providing a body

imported-ecore-custom-ecore5 imported-ecore-custom-ecore6

With the following implementation

Let’s run the MWE2 workflow so that it will regenerate the EMF classes.

And now we can implement the validator method checking duplicates, using the new getElements() method and the fact that now both Hello and Greeting implement Element:

That’s all! I hope you found this tutorial useful :)


Testing a plain SWT Application with SWTBot

Revision History
18 April 2014 Modified the SWTBot test so that it can be reused also in a test suite (see the comments to this post).

I happened to give a lecture at the University of Florence on Test Driven Development; besides the standard Junit tests, I also wanted to show the students some functional tests with SWTBot. However, I did not want to introduce Eclipse views or dialogs; I just wanted to test a plain SWT application with SWTBot.

In the beginning, it took me some time to understand how to do that (I had always used SWTBot in the context of an Eclipse application); thanks to Mickael Istria, who assisted me via Skype, it ended up being rather easy.

You can find this example here: https://github.com/LorenzoBettini/junit-swtbot-example.

The SWT application is a simple dialog that computes the factorial of the given input (nothing fancy, its code can be seen here).

swtbot-test-example1

If we now want to test this SWT application with SWTBot, we can write an abstract base class to be used by our tests (see also the online code)

And we use this base class in our tests, for instance

There are a few things to note in the abstract base class:

  • You need to spawn the application in a new thread (the bot will run in a different thread)
  • You must start the application before creating the bot (otherwise the Display will be null)
  • After that, you can simply use the SWTBot API as you’re used to.

Note that the thread will create our window and then it will enter the event loop; this thread synchronizes with the @Before method (executed before each test), which creates the SWTBot (using the shell created by the thread). The @After method (executed after each test), will close our window, so that each test is independent from each other. The thread executes in an infinite loop, thus as soon as the shell is closed it will create a new one, etc.

Of course, this must be executed as a “Junit test”, NOT as a “Plug-in Junit test”, nor as a “SWTBot Test”, since we do not want any Eclipse application while running the test:

swtbot-test-example2

In the sources of the example you can find also the files to run the tests headlessly with Buckminster or with Maven/Tycho. Just enter the directory mathutils.build and

for Buckminster or

for Maven.

For Buckminster, you just need to save the launch configuration you used to run the test, and use the junit command:

For Tycho, you must specify <packaging>eclipse-test-plugin</packaging>, with no further configuration: this defaults useUIHarness to false.

During the headless run, first the Junit tests for the implementation of the factorial will be executed (these are not interesting in the context of SWTBot) and then the SWTBot tests will be executed.

 

 


Using the Xtend compiler in Buckminster builds

Up to now, I was always putting the Xtend generated Java files in my git repositories (for my Xtext projects), since I still hadn’t succeeded in invoking the Xtend standalone compiler in a Buckminster build. Dennis Hübner published a post with some hints on how to achieve that, but that never worked for me (and apparently it did not work for other users either).

After some experiments, it seems I finally managed to trigger Xtend compilation in Buckminster builds, and in this post I’ll show the steps to achieve that (I’m using an example you can find on Github).

The main problems I had to solve were:

  • how to pass the classpath to the Xtend compiler
  • how to deal with chicken-and-egg problems (dependencies among Java and Xtend classes).

IMPORTANT: the build process described here uses a new flag for Buckminster’s build command, which has been added recently; thus, you must make sure you have an updated version of Buckminster headless (from the 4.3 repository).

The steps to perform can be applied to your projects as well; they are simple and easy to reproduce. In this blog post I’ll try to explain them in details.

This blog post assumes that you are already familiar with setting up a Buckminster build.

The example

The example I’m using is an Xtext DSL (just the Greeting example using Xbase), with many .xtend files and with the standard structure:

  • org.xtext.example.hellobuck, the runtime plugin,
  • org.xtext.example.hellobuck.ui, the ui plugin, which uses Xtend classes defined in the runtime plugin,
  • org.xtext.example.hellobuck.tests, the tests plugin, which uses Xtend classes defined in the runtime and in the ui plugin,
  • org.xtext.example.hellobuck.sdk, the SDK feature for the DSL.

Furthermore, we have two additional projects created by the Xtext Buckminster Wizard:

  • org.xtext.example.hellobuck.buckminster, the releng project,
  • org.xtext.example.hellobuck.site, the feature project for creating the p2 repository,

I blogged about the Xtext Buckminster Wizard in the past, and indeed this example is a fork of the example presented in that blog post.

Creating a launch configuration for the Xtend compiler

The first step consists in creating a Java launch configuration, in the runtime plugin project, that invokes the Xtend standalone compiler. This was shown in Dennis’ original post, but you need to change a few things. Here’s the XtendCompiler.launch file to put in the org.xtext.example.hellobuck runtime plugin project (of course you can call the launch file whatever you want):
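A minimal sketch of what the launch file can contain (the attribute keys are the standard JDT launch attributes; the program arguments are my reconstruction based on the description below — a real launch file saved by Eclipse will contain more attributes):

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<launchConfiguration type="org.eclipse.jdt.launching.localJavaApplication">
	<!-- the project containing the Xtend files to compile -->
	<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR"
			value="org.xtext.example.hellobuck"/>
	<!-- the Xtend standalone batch compiler -->
	<stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE"
			value="org.eclipse.xtend.core.compiler.batch.Main"/>
	<!-- reuse the resolved classpath of the containing project;
	     -d is the output folder, src is the source folder to compile -->
	<stringAttribute key="org.eclipse.jdt.launching.PROGRAM_ARGUMENTS"
			value="-cp ${project_classpath:org.xtext.example.hellobuck} -d xtend-gen src"/>
</launchConfiguration>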

This launch configuration can be reused in other projects, provided the project-specific values (see the sketch above) are changed accordingly, since they refer to the containing project.

An important part of this launch configuration is the PROGRAM_ARGUMENTS that are passed to the Xtend compiler, in particular the -classpath argument. This was the main problem I experienced in the past (and that I saw in all the other posts in the forum): the Xtend compiler needs to find the Java classes your Xtend files depend upon and thus you need to pass a valid -classpath argument. But we can simply reuse the classpath of the containing project :)

Add dependency for Xtend standalone compiler

This launch configuration calls the Java application org.eclipse.xtend.core.compiler.batch.Main thus you must add a dependency on the corresponding bundle in your MANIFEST.MF. The bundle you need to depend on is org.eclipse.xtend.standalone (the dependency can be optional):

xtend_standalone_dependency

 

Test the launch in your workbench

You can test this launch configuration from Eclipse, with Run As => Java Application. In the Console view you should see something like:

This will give you confidence that the launch configuration works correctly and that all dependencies for invoking the Xtend compiler are in place.

Add an XtendCompiler.launch in the other projects

You must now add an XtendCompiler.launch in all the other projects containing Xtend files. In our example we must add it to the ui and the tests projects.

You can copy the one you have already created, but MAKE SURE you update the three project-specific parts according to the containing project (the values pointed out above)!

NOTE: you do NOT need to add a dependency on org.eclipse.xtend.standalone in the MANIFEST.MF of the ui and tests projects: they depend on the runtime plugin project which already has that dependency.

You may want to run the XtendCompiler.launch also in these projects from the Eclipse workbench, again to get confidence that you configured the launch configurations correctly.

IMPORTANT: when the Xtend compiler compiles the files in the ui and tests projects, you will see some ERROR lines, e.g.,

From what I understand, these errors do not prevent the Xtend compiler from successfully generating Java files (see the final INFO line), and the procedure terminates successfully. Thus, you can ignore these errors. If the Xtend compiler really cannot produce Java files, it will terminate with a final error.

Configure the headless build

Now it’s time to configure the Buckminster headless build so that it runs the Xtend compiler. We created .launch files because one of the cool things of Buckminster is that it can seamlessly run the launch files.

The tricky part here is that, since we perform a clean build, there is a chicken-and-egg scenario:

  • no Java files have been compiled,
  • most Java files import Java files created by Xtend
  • the Xtend files import Java classes

To solve these problems we perform an initial clean build; this will run the Java compiler, and the compilation will terminate with errors. We expect that, due to the chicken-and-egg situation. However, this will create enough .class files to run the Xtend compiler! It is important to run the build command with the (new) flag --continueonerror, otherwise the whole build will fail.

After running XtendCompiler.launch in the org.xtext.example.hellobuck runtime project, we run another build with --continueonerror so that the Java files generated by the Xtend compiler are compiled by the Java compiler. We then proceed similarly for the ui and the tests projects:

Then, your build can proceed as usual (at this point I prefer to perform a clean build): run the tests (both plain Junit and Plug-in Junit tests) and create the p2 repository:

The complete commands file can be seen here.

Executing the headless build

You can now run your headless build on your machine or on Jenkins.

My favorite way of doing that is by using an ANT script. The whole ANT script can be found in the example.

This script also automatically installs Buckminster headless if not present.

Before executing the commands file, it also removes the contents of the xtend-gen folder in all the projects; this way you are sure that no stale generated Java files are there.

Of course, you should now remove the xtend-gen folder from your git repository (and put it in the .gitignore file).

In Jenkins you can configure an Invoke Ant build step as shown in the screenshot (“Start Xvfb before the build, and shut it down after” is required to execute Plug-in Junit tests; we also pass an option to install Buckminster headless in the job’s workspace).

xtext-xtend-buckminster-jenkins

Try the example

You can just clone the git repository, and then

As noted above, this will also install Buckminster headless if not found in the location specified by the property buckminster.home. This script will take some time, especially the first time, since it will materialize the target platform.

Hope you find this blog post useful! :)


Update Samsung Galaxy Wonder I8150 to Android Jelly Bean

Revision History
26 September 2013 Updated the steps for entering download mode so that Odin can detect your phone.

I’ve always wanted to update my Samsung Galaxy Wonder to Android Jelly Bean; lately my cellphone had become quite slow (especially after the latest upgrades to Android 2.3 from Samsung), and I always wanted to install Chrome and Google Keep, which both require Android 4. Samsung does not provide any official release of Android 4, so I decided to go for a custom ROM. In particular, I’m using Android Jelly Bean, 4.2.2, CyanogenMod 10.1 ALPHA (Build 7).

I had quite a hard time understanding how to install it, basically because I had never installed a custom ROM before. Moreover, the instructions can be found on the web, but I never found a complete tutorial showing the procedure from the very start: they all assume that you have already performed the previous steps.

So I decided to write a complete tutorial (based on Windows). It assumes you have an external sdcard in the phone.

The update will wipe out all your data, so make sure you back it up first. Proceed at your own risk. You will also lose the Samsung warranty.

PLEASE, if you have problems with this custom ROM, DO NOT post comments to this blog post: use the official discussion forum thread.

Make sure you read the whole tutorial first, before proceeding.

Disclaimer: All the tools, mods or ROMs mentioned below belong to their respective owners/developers. I am not to be held responsible if you damage or brick your device.

USB Drivers

Make sure you can connect your Android phone with the computer. If not, install the USB drivers for Samsung Galaxy W properly. The easiest way is to install Samsung Kies.

Install ClockworkMod recovery using Odin

Download Odin zip file: Mediafire (this is the original page with this information)

Extract the Odin Multi_Downloader_v4.43.exe file and run it.

Download recovery-clockwork-6.0.3.4-ancora.tar.md5 and Ancora.ops (this is the original page with this information).

In Odin

  • enable “One Package” in options
  • Select the Ancora.ops file,
  • load the md5 file with the “One Package” button

Now you need to put your phone into download mode, in order to upload the ClockworkMod recovery file:

  • turn off phone,
  • hold Volume Down + Home + Power Button for a while till the phone turns on.
  • the phone will turn on and show some screen,
  • plug in usb cable
  • press Volume Up

At this point Odin should detect the connected phone

In Odin press the “Start” button, and the downloading should start (see the phone):

Wait for the download to finish (see Odin)

Now you can unplug and turn off the phone.

Create a backup of the current image

Turn on the phone in Recovery Mode: hold Volume Up + Home + Power for a while till the phone turns on (note: this time it is “Volume Up”, not “Volume Down”). Release the power button as soon as the Samsung logo appears, and keep holding Volume Up + Home until the ClockworkMod recovery appears on the screen.

To use the menus:

  • Volume buttons to move in the menu
  • Home button to select a menu
  • Power button to go back

Select “backup and restore” and then “backup to external sdcard”:

and wait for the backup to complete

You can now reboot the phone from the main screen. The phone will reboot as usual.

You may want to store a copy of the backup on your computer’s hard disk; just connect the phone via USB and navigate to the directory where the backup was saved on the phone’s sdcard:

Perform the update

Download CyanogenMod 10.1 ALPHA (Build 7) and Google Apps

(this is the original source with the links)

Put these files in the root folder of your external SD card (you can turn on the phone as usual and connect it via USB for that, or copy these files from the computer using an external card reader).

When the copy has finished, disconnect the phone and turn it off.

Switch ON the phone in Recovery Mode: press and hold Volume Up + Home + Power together.

Now wipe data and cache selecting the following commands (and wait for them to complete)

  • Select “wipe data/factory reset”
  • Select “wipe cache partition”
  • Select “advanced” and then select “wipe dalvik cache”

Now go back to the main menu (Recall: use the power button to go back)

Select “install zip from sdcard” and choose the zip file containing the OS (in this case it is cm-10.1-20130611-EXPERIMENTAL-ancora-alpha7.zip) from the root of the SD card (where you previously copied it).

Now do the same for the Google Apps zip file: select “install zip from sdcard” and choose the gapps zip file (in this case gapps-jb-20130301-signed.zip) from the root of the SD card.

Reboot into the new system

Now you’re ready to reboot into the new system from the main menu!

NOTE: your phone will now boot, and the first boot might take about 5 minutes, so please wait.

If everything went fine, you should see the new logo

Then you should see all the menu screens for configuring the phone!

Note: in my case, this procedure started with an error message saying that the vocal synthesis engine had crashed, but I simply ignored the message and went on.

After you have entered your Google account, the phone should be ready; at this point, if you have set up an Internet connection, all the applications you had previously installed from Google Play should be installed automatically… it took about an hour in my case.

First Impressions

The system seems rather stable and surely more responsive than before!

Battery usage seems to have increased, especially when connected with WIFI.

All in all, I’m very happy with the new system. :)

External Sources

These are all the links where I found the software and information I based this tutorial on.


Building an Eclipse RCP Product with Buckminster

Revision History
9 May 2013 Updated listings to reflect the git repository sources. Put a tip on using a mirror aggregated with b3.

In this tutorial I’ll show how to use Buckminster to build an Eclipse RCP product, both in the IDE and headlessly (with Ant). The application I’m building is the standard Eclipse Mail RCP example, with the addition of self-update functionality.

We will build two products configured with update sites; the first one will rely on standard Eclipse repositories for required features, while the second one will rely only on our own repositories.

The sources of this example can be found at http://sourceforge.net/p/buckyexamples/bucky-mail-rcp/?branch=ref%2Fmaster.

Motivations

There are some nice tutorials about building Eclipse RCP products with Buckminster (such as Ralf Ebert‘s, Code and Me‘s, and the wiki pages).

However, I found these pages out of date, in the sense that they use Indigo or Helios for building products; with Juno things are more complicated, not because of Buckminster, but because of new dependencies in Juno Eclipse features and bundles (for instance, org.eclipse.rcp internally depends on org.eclipse.emf.common and org.eclipse.emf.ecore), even if you do not use the new e4 application model; see for instance the dependencies in the screenshot

This means that you will have to deal with that in the target platform definition.

Furthermore, due to the way p2 repositories are built, you will soon get the dreaded “java returned 13” error when using Buckminster to build an Eclipse product (which actually relies on the p2 director), due to the above-mentioned dependencies. Even for simple products like the Eclipse RCP Mail application… you can imagine with bigger products ;) By the way, you get similar problems if you try to use the standard Eclipse Product export wizard.

In this post I’ll detail my experience in dealing with these problems by using Buckminster and Eclipse standard mechanisms for dealing (automatically) with dependencies; similarly, I’m not using standard Target platform definitions (which again have problems if you want to build for multiple architectures), but I’m using the nice Buckminster materialization features for materializing the target platform. The same techniques can be used with much more complex products to build them without problems due to dependencies and required software.

The tutorial is quite long, since I’ll also try to provide some explanations of the problems you get when building products (in general, I guess); although the explanations are not strictly necessary, I think they might be useful to understand features, bundles, products, and p2 better.

Materializing the Target Platform

First of all, you need to install Buckminster in your Eclipse, using this repository

http://download.eclipse.org/tools/buckminster/updates-4.2

you will need only the features shown in the screenshot

Then, we need to create a project with all our releng functionality; this will be a general Eclipse project with a Buckminster Component Specification (CSPEC); this CSPEC basically declares the target platform features as dependencies. The reasons why I’m not using a standard Eclipse target platform definition can be found in my other post, in the section “Why not the Target Editor?”; they can be summarized by the fact that, with this technique, you get a target platform for building for multiple platforms, with all the required software, automatically.

We call this releng project org.eclipse.buckminster.examples.rcp.mail.releng and the buckminster.cspec looks like this (we will enrich this cspec later to perform headless builds):

We basically want a target platform with org.eclipse.rcp (and its sources, since they are useful when developing), org.eclipse.equinox.executable to build executable applications, and org.eclipse.equinox.p2.user.ui to enable the p2 update manager in our RCP application.
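A sketch of what this buckminster.cspec can look like (assuming the CSpec 1.0 schema; treat the exact dependency list as derived from the description above):

<?xml version="1.0" encoding="UTF-8"?>
<cs:cspec xmlns:cs="http://www.eclipse.org/buckminster/CSpec-1.0"
		name="org.eclipse.buckminster.examples.rcp.mail.releng"
		componentType="buckminster"
		version="1.0.0.qualifier">
	<cs:dependencies>
		<!-- these features form our target platform -->
		<cs:dependency name="org.eclipse.rcp" componentType="eclipse.feature"/>
		<cs:dependency name="org.eclipse.rcp.source" componentType="eclipse.feature"/>
		<cs:dependency name="org.eclipse.equinox.executable" componentType="eclipse.feature"/>
		<cs:dependency name="org.eclipse.equinox.p2.user.ui" componentType="eclipse.feature"/>
	</cs:dependencies>
</cs:cspec>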

Then, we define a Resource Map (RMAP) which tells Buckminster where to find these dependencies; we will use, of course, official Eclipse p2 repositories for these dependencies; we store this map in a file build.rmap:

This basically tells Buckminster to take

  • all the components whose name starts with org.eclipse.buckminster.examples.rcp.mail (which will be used for all our bundles and features in this example) from the local hard disk
  • and everything else from the main Juno releases repository.

Now, we define a Component Query (CQUERY) which materializes this very component; when Buckminster resolves a component, it first, transitively, resolves all its dependencies; thus, resolving our releng component corresponds to materializing our target platform. The build.cquery looks like this (note the reference to the build.rmap we wrote above):
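A sketch of what the build.cquery can look like (assuming the CQuery 1.0 schema; advisor nodes, if any, are omitted):

<?xml version="1.0" encoding="UTF-8"?>
<cq:componentQuery xmlns:cq="http://www.eclipse.org/buckminster/CQuery-1.0"
		resourceMap="build.rmap">
	<!-- resolving this component materializes the whole target platform -->
	<cq:rootRequest name="org.eclipse.buckminster.examples.rcp.mail.releng"
			componentType="buckminster"/>
</cq:componentQuery>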

Before starting the materialization, it is better to start from a plain and empty target platform (just like you do with a standard target definition with the target editor); one nice way of doing this, as illustrated also here, is

  1. Create a new general project named TP (or some name of your preference) in the workspace
  2. In “Window” => “Preferences” => “Plug-in Development” => “Target Platform”, select “Add…”
  3. Start with an empty target definition
  4. Enter TP in the Name: field (or some name of your preference)
  5. Add a directory
  6. Click on “Variables…” scroll down and select “workspace_loc” and then type TP in the Argument: field
  7. Press “Ok” and “Finish” twice, and
  8. set this as the active platform

With the build.cquery opened in Component Query Editor, we can start the materialization by pressing “Resolve and Materialize”

This might take some time depending on your Network connection.

(TIP: you may want to aggregate a local mirror using Eclipse b3; the aggregator file for the mirror is in aggregator/target-platform-mirror.b3aggr; you can aggregate the mirror from Eclipse, after installing b3, or headlessly using the target “b3_aggregation” of build.ant. Then, you can use the build-local.cquery you find in the releng project in the git repository; see also the README.txt file in the releng project.)

When the materialization finishes you find the target platform in the TP project in your workspace (if you followed the instructions above).

TIP: after a materialization of the target platform, it might be better to restart Eclipse, since sometimes Buckminster tends not to catch up correctly with the new target platform.

Creating our mail projects

We now create the bundle for our RCP application, using the Eclipse wizard and the RCP Mail template; this part is standard so I will not detail it here: org.eclipse.buckminster.examples.rcp.mail.bundle contains the RCP Mail bundle created with the wizard, and appropriately modified in order to enable the same update UI functionalities used in the SDK inside our RCP app (this is illustrated in this wiki page, the modifications to ApplicationWorkbenchWindowAdvisor and ApplicationActionBarAdvisor are marked in the code with ‘XXX’ task tags). We also create org.eclipse.buckminster.examples.rcp.mail.optional.bundle which contains an optional menu (and toolbar button).

We then create the feature project org.eclipse.buckminster.examples.rcp.mail.product.feature which contains our product definition:

  1. In this project we create a new product definition, mail.product, based on features (make sure that there is no <plugins> section in your feature-based product: open mail.product with the Text editor and delete the <plugins> section if found)
  2. Give the product an ID (which must be different from the ID of the containing feature), org.eclipse.buckminster.examples.rcp.mail.product
  3. In the Product definition section choose the Product org.eclipse.buckminster.examples.rcp.mail.bundle.product from the Application org.eclipse.buckminster.examples.rcp.mail.bundle.application (don’t get confused by the term ‘product’, which is overloaded in this context: it is used both for the org.eclipse.core.runtime.products extension point and for the product configuration which will be used to create the final product :)

In the Dependencies tab, add as the only dependency the feature we are in, org.eclipse.buckminster.examples.rcp.mail.product.feature. (We also do some branding and customizations, but they’re not interesting in this context).

Now, we have to “fill” the feature.xml of org.eclipse.buckminster.examples.rcp.mail.product.feature, keeping in mind that what’s in this feature will make up our final product (a sketch of the resulting feature.xml is shown after the list). Thus we add:

  • org.eclipse.buckminster.examples.rcp.mail.bundle, in the plug-in section, for our Mail RCP application
  • org.eclipse.rcp, as included feature, required to build an RCP product
  • org.eclipse.equinox.p2.user.ui, as included feature, to enable the p2 update manager in our RCP application
  • Note that org.eclipse.buckminster.examples.rcp.mail.optional.bundle is NOT part of product.feature, since this is meant to represent an optional functionality that can be installed later

(Screenshots: the plug-ins and the included features of product.feature.)
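As a rough sketch (label and version numbers are illustrative; 0.0.0 is the standard PDE placeholder for “any version”), the resulting feature.xml could contain:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<feature id="org.eclipse.buckminster.examples.rcp.mail.product.feature"
      label="Mail Product Feature"
      version="1.0.0.qualifier">

   <!-- our Mail RCP application bundle -->
   <plugin id="org.eclipse.buckminster.examples.rcp.mail.bundle"
         download-size="0" install-size="0" version="0.0.0" unpack="false"/>

   <!-- included features: RCP itself and the p2 update manager UI -->
   <includes id="org.eclipse.rcp" version="0.0.0"/>
   <includes id="org.eclipse.equinox.p2.user.ui" version="0.0.0"/>
</feature>
```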

We can create a launch configuration (Eclipse Application) by selecting our product, then selecting only our feature org.eclipse.buckminster.examples.rcp.mail.product.feature and pressing “Select Required”.

(Screenshots: the launch configuration for the mail product.)

The mail application should look like this (note the Preferences menu and the Update functionalities):

(Screenshot: the mail application running.)

In org.eclipse.buckminster.examples.rcp.mail.product.feature we also add a touchpoint advice file p2.inf (see the online help for more details) to configure the repositories (update sites) that should initially be present in the application:

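A sketch of such a p2.inf, using the standard p2 addRepository touchpoint action (type 0 is a metadata repository, type 1 an artifact repository; colons in URLs must be escaped as ${#58}; the Orbit URL and the repository names are illustrative, while the Sourceforge URL is the one given later in this post):

```properties
instructions.configure = \
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:0,location:http${#58}//download.eclipse.org/releases/juno,name:Juno);\
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:1,location:http${#58}//download.eclipse.org/releases/juno,name:Juno);\
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:0,location:http${#58}//download.eclipse.org/tools/orbit/downloads,name:Orbit);\
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:1,location:http${#58}//download.eclipse.org/tools/orbit/downloads,name:Orbit);\
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:0,location:http${#58}//master.dl.sourceforge.net/project/buckyexamples/bucky-mail-rcp/updates,name:MailUpdates);\
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:1,location:http${#58}//master.dl.sourceforge.net/project/buckyexamples/bucky-mail-rcp/updates,name:MailUpdates);
```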
We add the Juno release repository, the Orbit repository (not required, but just as a demonstration), and the repository on Sourceforge, where we deploy our p2 repository for our application (see the next section).

This file will be used during product build (see later).

Build a p2 Repository for our application

To build an update site (in the new terminology, a “p2 repository”) for our product feature, we create a new feature project, org.eclipse.buckminster.examples.rcp.mail.product.site, and in its feature.xml we include our product feature (org.eclipse.buckminster.examples.rcp.mail.product.feature). We also create org.eclipse.buckminster.examples.rcp.mail.optional.feature (which includes org.eclipse.buckminster.examples.rcp.mail.optional.bundle) and include this feature in product.site as well; this way, once the repository is deployed, the optional functionality can be installed into an existing mail application.

Since we’d like to have a category for our features, we add a category.xml file like the following one:

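A sketch of such a category.xml (standard PDE format; the category name and label are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <!-- the product feature and the optional feature, grouped under one category -->
   <feature id="org.eclipse.buckminster.examples.rcp.mail.product.feature"
         version="1.0.0.qualifier">
      <category name="mail.category"/>
   </feature>
   <feature id="org.eclipse.buckminster.examples.rcp.mail.optional.feature"
         version="1.0.0.qualifier">
      <category name="mail.category"/>
   </feature>
   <category-def name="mail.category" label="Mail RCP Example"/>
</site>
```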
Remember: when invoking site.p2 on a feature project with Buckminster, you build a p2 repository NOT for that very feature, but for its included features.

Before creating the repository we use a properties file like the following to specify additional parameters for the repository creation:

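A sketch of such a properties file (buckminster.output.root, the target.* wildcards and the qualifier.replacement generator are standard Buckminster build properties; the exact name of the property disabling source generation is an assumption, so double-check it against your Buckminster documentation):

```properties
# where the generated artifacts (and temporary files) go
buckminster.output.root=${user.home}/tmp/mail/buckminster.output
buckminster.temp.root=${user.home}/tmp/mail/buckminster.temp

# replace .qualifier with a timestamp of the latest changed resource
qualifier.replacement.*=generator:lastModified
generator.lastModified.format='v'yyyyMMdd-HHmm

# do not generate source bundles/features
# (assumption: verify this property name in the Buckminster docs)
site.include.source=false

# build for all supported operating systems, window systems and architectures
target.os=*
target.ws=*
target.arch=*
```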
These properties specify:

  • the output directory,
  • how the .qualifier in bundle and feature versions is replaced (in this example we replace it with a timestamp of the latest changed resource, with some formatting),
  • that no source features and source bundles are generated: by default Buckminster generates them too when creating the p2 repository, but for a product it might not make sense to make sources installable, so we disable their generation,
  • that we create a repository for all supported architectures and operating systems.

We can now run the site.p2 Buckminster action on this project: right-click on the feature project => Buckminster => Invoke Action…, choose the properties file above and select site.p2.

(Screenshot: the Invoke Action dialog with site.p2 selected.)

The p2 repository will be generated (if you used the above properties file) into <your home>/tmp/mail/buckminster.output/org.eclipse.buckminster.examples.rcp.mail.product.site_1.0.0-eclipse.feature/site.p2; you can test your newly created repository by using the Install New Software dialog in a running Eclipse, specifying the complete local path of the created site.p2.

(Screenshot: the contents of the generated product.site repository.)

The generated repository can then be deployed to a remote site; in our example we deploy it to Sourceforge, where we also host this tutorial code: http://master.dl.sourceforge.net/project/buckyexamples/bucky-mail-rcp/updates (if you want to browse it, use http://sourceforge.net/projects/buckyexamples/files/bucky-mail-rcp/updates/).

Building the product

Buckminster does not provide direct means to build a product: it relies on the standard Eclipse org.eclipse.equinox.p2.director application, so you just need to set up a few more files, with pretty standard contents.

Small digression about the p2 director

I think it is worthwhile to know a few basic things about how the Eclipse p2 director application works in order to understand what follows (you can also use it for doing cool things, like installing features into your Eclipse installations from the command line).

If you provide the director application with

  • a repository (or a list of repositories separated by commas),
  • the ID of an installable unit,
  • a profile,
  • the architecture details,
  • and a destination folder,

the director will create/install (provision) in that destination folder the requested product.

For instance, try to run the following command, replacing the path of your Eclipse executable (on Windows you should use the command-line version, eclipsec.exe), the destination path and the architecture details for your system

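A sketch of such an invocation (Linux/GTK 64-bit shown; the repository URL is the Eclipse 4.3/Kepler update site, org.eclipse.sdk.ide is the IU of the Eclipse SDK, and the destination path is illustrative):

```sh
./eclipse -nosplash \
  -application org.eclipse.equinox.p2.director \
  -repository http://download.eclipse.org/eclipse/updates/4.3 \
  -installIU org.eclipse.sdk.ide \
  -profile SDKProfile \
  -p2.os linux -p2.ws gtk -p2.arch x86_64 \
  -destination /tmp/eclipse-sdk
```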
and you’ll get a brand new Eclipse SDK from the Kepler site.

So what we will do to build our product is

  1. first create a p2 repository with all the features and bundles needed by our RCP application
  2. and then run the p2 director with appropriate arguments (relying on the p2 repository we created).

Setting up a project for building the product

Since we already have a feature project for building the p2 repository, org.eclipse.buckminster.examples.rcp.mail.product.site, we will use it also for building the product.

Inside this project we create a folder (build) with the following product.ant file, which calls the p2 director (in the following we will also provide some explanations):

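The full listing is not reproduced here; as a rough sketch of the idea, here is a hypothetical product.ant that provisions the product by invoking the director application through the Equinox launcher (assumptions: Buckminster’s ant actor passes the site.p2 output as ${sp:repository} and the output directory as ${sp:destination}, while profile, iu and the target.* values arrive as plain properties; eclipse.home points to the Eclipse installation running the build):

```xml
<?xml version="1.0"?>
<project name="mail.product" default="create.product">
  <target name="create.product">
    <property name="destination" location="${sp:destination}"/>
    <!-- turn the local repository path into a file: URL for the director -->
    <makeurl property="repository" file="${sp:repository}"/>
    <!-- locate the Equinox launcher jar of the running Eclipse -->
    <pathconvert property="equinox.launcher.jar">
      <first count="1">
        <fileset dir="${eclipse.home}/plugins"
            includes="org.eclipse.equinox.launcher_*.jar"/>
      </first>
    </pathconvert>
    <!-- provision the product from the p2 repository built by site.p2 -->
    <java jar="${equinox.launcher.jar}" fork="true" failonerror="true">
      <arg line="-application org.eclipse.equinox.p2.director"/>
      <arg line="-repository ${repository}"/>
      <arg line="-installIU ${iu}"/>
      <arg line="-profile ${profile}"/>
      <arg line="-destination ${destination}"/>
      <arg line="-p2.os ${target.os} -p2.ws ${target.ws} -p2.arch ${target.arch}"/>
      <arg value="-roaming"/>
    </java>
  </target>
</project>
```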
For each plug-in and feature project, Buckminster automatically infers a Component Specification (you can view it by right-clicking on the project => Buckminster => View CSpec…) with its standard actions; we must extend this specification with additional actions to create the product, by creating inside our feature project a buckminster.cspex file (note cspex, with a final x instead of c):
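A sketch of such a cspex (the cspecExtension format and the ant actor are standard Buckminster; the action name create.product, the profile and iu values, and the repository/destination aliases are illustrative and must match what product.ant expects):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cspecExtension xmlns:com="http://www.eclipse.org/buckminster/Common-1.0"
    xmlns="http://www.eclipse.org/buckminster/CSpec-1.0">
  <actions>
    <!-- run build/product.ant with the ant actor to create the product -->
    <public name="create.product" actor="ant">
      <actorProperties>
        <property key="buildFile" value="build/product.ant"/>
        <property key="targets" value="create.product"/>
      </actorProperties>
      <properties>
        <property key="profile" value="MailProfile"/>
        <property key="iu" value="org.eclipse.buckminster.examples.rcp.mail.product"/>
      </properties>
      <!-- the p2 repository built by site.p2 is the input of the director -->
      <prerequisites alias="repository">
        <attribute name="site.p2"/>
      </prerequisites>
      <products alias="destination" base="${buckminster.output}/mail.product/"/>
    </public>
  </actions>
</cspecExtension>
```

With this in place, create.product appears among the actions offered by Buckminster => Invoke Action… on the feature project, next to the inferred standard actions such as site.p2.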