Felix Gertz -
Software development and consulting
Hello, my name is Felix Gertz and I am a full-stack software developer, consultant and systems integrator with more than 25 years of professional experience.
Alongside conventional websites, I am passionate about developing rich internet applications such as complex web apps and touchscreen point-of-service terminals with interactive media.
I also design extensive interfaces and backend systems and integrate them with data protection in mind.
Sustainable deployments on classic Linux server architectures keep your projects cost-effective and maintainable over the long term by avoiding over-complexity. Of course, I am also happy to advise you on Kubernetes and AWS.
You can also book me for an external, objective effort assessment of your existing or planned projects.
I provide services for larger agile teams or shorter projects with tight planning.
I speak German, English and Danish, live in Hamburg, Germany and enjoy working worldwide.
Let's work together: feel free to contact me, without obligation.
Phone: +49 40 28578495
Email: webinfo@felixgertzPUNCTUMde
LinkedIn
Xing
Focus technologies
I use these technologies and principles currently, regularly and fluently. Obsolete technologies are not listed.
React.js (since 2015)
Node.js for server applications (since 2011)
Linux server and infrastructure (since 1998)
HTML/CSS (since 1996)
Functional asynchronous JavaScript (since 2011)
TypeScript (since 2019)
PostgreSQL (since 2009)
MongoDB (since 2012)
Web servers, HA clusters (since 1998)
Networks and security (since 1998)
Containers (LXC, LXD, Docker) (since 2015)
Kubernetes (since 2017)
Test Driven Development (since 2010)
Legal matters and data protection in Germany (since 2005)
Reference projects
DB InfraGO AG's Infrastructure Manager is a new version and improvement of an existing legacy system written in Smalltalk, which can no longer be maintained due to its age.
The IDBF's Infrastructure Manager can be used to view, check and consolidate track diagrams and operating points, e.g. railway stations, on the German rail network. If required, the track diagrams contain all signals, points and other TIOs (topographical infrastructure objects). By entering and sequencing station abbreviations (RIL100), entire routes and route sections can be displayed in great detail and with high performance through dynamically rendered SVGs.
The web application is written in TypeScript and React.js, uses Redux Saga for application state management and a heavily extended MUI as its component library. Cypress is used to run end-to-end and integration tests.
The "front end" communicates with a "back end for front end" (BFF), which acts as a proxy to the other interfaces and provides real-time multi-user capability via web sockets, which is important for the interactive consolidation of the track diagrams. The BFF is realised using Node.js, TypeScript and Express.js and is hosted on a Kubernetes cluster.
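To illustrate the pattern described above, here is a minimal TypeScript sketch of such a BFF: an Express server that proxies a REST resource to an upstream interface and broadcasts edits to all connected WebSocket clients. The upstream address and the /api/diagrams route are invented placeholders, not the actual DB InfraGO endpoints.

import express from "express";
import { WebSocketServer, WebSocket } from "ws";

// Hypothetical upstream address; the real interfaces are DB-internal.
const UPSTREAM_API = "https://upstream.example.internal";

const app = express();

// Proxy a REST resource to the upstream interface.
app.get("/api/diagrams/:ril100", async (req, res) => {
  const upstream = await fetch(`${UPSTREAM_API}/diagrams/${req.params.ril100}`);
  res.status(upstream.status).json(await upstream.json());
});

const server = app.listen(8080);

// Push channel for real-time multi-user consolidation.
const wss = new WebSocketServer({ server });

function broadcast(event: object): void {
  const message = JSON.stringify(event);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(message);
  }
}

// Re-broadcast each client's edit so every user sees changes live.
wss.on("connection", (socket) => {
  socket.on("message", (data) => broadcast(JSON.parse(data.toString())));
});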
A TypeScript React application that lets customers operate the Poststation directly via the station's on-site touchscreen.
In more rural areas, the Poststation replaces the old post offices and branches; all basic postal services and products, e.g. posting a registered letter, are available automatically through the station. The Poststation is the sister of the Packstation, albeit developed from scratch.
The touchscreen application is written in TypeScript and React.js and uses MUI as a library for the basic components, which have been extended to meet the needs of the Poststation.
The application state management is implemented via a purpose-built bidirectional architecture based on RxJS. It accesses both the hardware interface of the station, which can, for example, open compartments or read the scanner, and the DHL Group's services API to process the orders.
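As a rough illustration of such a bidirectional loop, the following TypeScript sketch merges UI intents and hardware events into one RxJS state stream that the views subscribe to. The event types and subject names are my own invented stand-ins, not the actual Poststation code.

import { Subject, merge } from "rxjs";
import { scan } from "rxjs/operators";

// Illustrative event types; the real hardware protocol is DHL-internal.
type UiIntent = { type: "OPEN_COMPARTMENT"; id: number };
type HardwareEvent = { type: "SCANNER_READ"; barcode: string };
interface AppState { openCompartment: number | null; lastBarcode: string | null }

const uiIntents$ = new Subject<UiIntent>();           // from the React components
const hardwareEvents$ = new Subject<HardwareEvent>(); // from the station hardware

// Both directions merge into one state stream that the views subscribe to.
const state$ = merge(uiIntents$, hardwareEvents$).pipe(
  scan((state: AppState, event: UiIntent | HardwareEvent): AppState => {
    switch (event.type) {
      case "OPEN_COMPARTMENT": return { ...state, openCompartment: event.id };
      case "SCANNER_READ": return { ...state, lastBarcode: event.barcode };
    }
  }, { openCompartment: null, lastBarcode: null })
);

state$.subscribe((state) => console.log("render with", state));
uiIntents$.next({ type: "OPEN_COMPARTMENT", id: 12 });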
The behaviour of the user input and the DOM are tested using Jest and the React Testing Library.
Take a look for yourself:
Find the Poststation in your neighbourhood!
Tredict is a web app that is used by thousands of endurance athletes and trainers worldwide for manufacturer-independent training planning and post-analysis. Tredict is developed and operated by me.
Endurance athletes use Tredict to plan their activities in the training calendar; these are then automatically delivered to the athlete's sports watch at the right time, ready for execution, via the corresponding manufacturers' OAuth interfaces. A completed training session then appears in the Tredict training logbook for subsequent analysis.
Tredict enables the connection of athlete profiles with each other, so that the training progress can be planned or followed by a coach or acquaintance.
The frontend is a web application written in React.js and modern JavaScript and runs on both desktop and mobile phones.
On the backend side, a Service Oriented Architecture (SOA) is used, which is realised using distributed Node.js services and modern JavaScript. Individual services take on domain-specific tasks, such as processing activities, the import interface, provision of the BFF (Backend-For-Frontend), user interfaces, dynamic websites for training plans and much more.
The container system used is LXD, which is linked to an Nginx load balancer.
For reliability, Tredict runs on three distributed dedicated servers, which also replicate the MongoDB.
The servers communicate via an encrypted virtual network.
Data storage is also encrypted at file system level.
A fourth server receives incremental backups of the database, which are managed with the help of ZFS on Ubuntu.
Tredict integrates the OAuth interfaces of Garmin, Suunto, Polar, Coros, Wahoo, Adidas, Dropbox, PayPal for Business and others, and also provides an OAuth interface of its own for connecting other apps.
The landing page is realised via a purpose-built server-side static JSX rendering system and can easily be delivered via a content delivery network. The public page for training plans and coaches is delivered dynamically using Express.js.
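The core of such a static JSX rendering system fits in a few lines with react-dom/server; the sketch below is a minimal illustration under my own assumptions (the LandingPage component and output path are placeholders), not the actual Tredict build code.

import { writeFileSync } from "node:fs";
import React from "react";
import { renderToStaticMarkup } from "react-dom/server";

// Placeholder component standing in for the real landing page.
function LandingPage({ title }: { title: string }) {
  return React.createElement("main", null, React.createElement("h1", null, title));
}

// Render once at build time; the result is plain static HTML that a
// content delivery network can cache and serve without a running server.
const html = "<!doctype html>" +
  renderToStaticMarkup(React.createElement(LandingPage, { title: "Tredict" }));

writeFileSync("dist/index.html", html);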
Distributed Node.js microservices cluster that runs via Google Kubernetes on AWS.
This backend system is the central point of the collectAI system. It is connected to external interfaces in order to automatically receive payment information from creditors and merchants about overdue payments, which is then transferred into the system's own interface by an automated ETL process. AI-supported, fully automated payment requests and even debt collection messages can then be sent to debtors in-house.
Dedicated microservices handle the connection and processing of individual external APIs, the creation of invoices and PDFs, dispatch logic, the provision of the system's own APIs, data export options and feedback interfaces.
The microservices are mainly realised in Node.js with modern functional, asynchronous, non-blocking JavaScript and use PostgreSQL and MongoDB as database backends. The tests are executed with Mocha.
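To give a flavour of this style, here is a minimal Mocha sketch of a functional, asynchronous transform such as an ETL normalisation step; the normalisePayment function and its field names are invented for illustration and are not collectAI code.

import assert from "node:assert";

// Hypothetical ETL step: normalise an external payment record into the
// shape the system's own interface expects. Field names are invented.
type ExternalPayment = { AMOUNT_CENTS: string; DUE: string };
type InternalPayment = { amountCents: number; dueDate: Date };

const normalisePayment = async (raw: ExternalPayment): Promise<InternalPayment> => ({
  amountCents: Number.parseInt(raw.AMOUNT_CENTS, 10),
  dueDate: new Date(raw.DUE),
});

// Mocha provides describe/it as globals when run via the mocha CLI.
describe("normalisePayment", () => {
  it("converts an external record into the internal shape", async () => {
    const result = await normalisePayment({ AMOUNT_CENTS: "1999", DUE: "2016-05-01" });
    assert.strictEqual(result.amountCents, 1999);
    assert.strictEqual(result.dueDate.getUTCFullYear(), 2016);
  });
});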
Graphical user interface (GUI) for DC/OS (the Datacenter Operating System), realised as a web application using React.js and modern functional JavaScript. Redux implements the unidirectional application state management of this complex React application.
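Unidirectional state management with Redux boils down to a pure reducer and a single store. The sketch below is a generic TypeScript illustration with invented state and action names, not the actual DC/OS UI store.

import { createStore } from "redux";

// Invented example state and actions, for illustration only.
interface ServicesState { running: number }
type Action = { type: "SERVICE_STARTED" } | { type: "SERVICE_STOPPED" };

// A pure reducer: (state, action) -> new state, never mutation.
function servicesReducer(state: ServicesState = { running: 0 }, action: Action): ServicesState {
  switch (action.type) {
    case "SERVICE_STARTED": return { running: state.running + 1 };
    case "SERVICE_STOPPED": return { running: Math.max(0, state.running - 1) };
    default: return state;
  }
}

const store = createStore(servicesReducer);
store.subscribe(() => console.log(store.getState())); // the view layer re-renders here
store.dispatch({ type: "SERVICE_STARTED" });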
DC/OS UI uses the in-house CSS framework DCOS UI KIT, which emerged from a collaboration between the designers and us developers.
Since 2016, the Cypress testing framework has been used for automated browser tests, exercising realistic application behaviour in addition to the regular BDD tests run with Jest.
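A Cypress browser test drives the real UI in a real browser; this minimal sketch shows the style, with an invented URL and selector rather than anything from the actual DC/OS UI test suite.

// cypress/e2e/services.cy.ts - URL and selector are invented examples.
describe("services list", () => {
  it("shows deployed services after loading", () => {
    cy.visit("http://localhost:4200/services"); // assumed local dev server
    cy.get("[data-cy=service-row]").should("have.length.greaterThan", 0);
  });
});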
With DC/OS, distributed services such as Kafka, Cassandra, Nginx, custom services, Docker containers and much more can be operated at very large scale on big clusters, all managed and orchestrated by DC/OS. Even Google Kubernetes can be managed and monitored by DC/OS, so it is possible to run a Kubernetes cluster within a larger DC/OS cluster.
DC/OS is used by a large number of major companies and even government agencies.
DCOS UI is published under an open source licence:
Mesosphere DCOS UI on Github
Marathon UI is the web interface for Mesosphere's Marathon, the container and application orchestration for Apache Mesos and the DC/OS (Datacenter Operating System).
Using the Marathon UI, long-running, distributed server applications, services and Docker containers on a Mesos cluster can be managed and monitored from the web browser with Mesosphere Marathon. This makes it possible to operate a Mesos cluster, a data centre and distributed applications even without knowledge of the Mesos API.
Development pays close attention to the community: the application is published under an Apache open-source licence, and external reviews and code contributions take place.
Mesosphere Marathon UI on Github
The web interface is written in React.js (starting with React 0.8 in 2014) and modern JavaScript and uses Flux as a unidirectional application architecture and state management. All automated tests are executed with Mocha. The programming paradigm is functional, using modern ECMAScript with the aid of Lazy.js and Underscore.js.
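The Flux cycle (action, dispatcher, store, view) can be sketched with the Dispatcher from the original flux package; the payload shape and store below are invented for illustration and are not the Marathon UI source.

import { Dispatcher } from "flux";

// Invented payload shape, for illustration only.
type Payload = { actionType: "APP_STARTED"; appId: string };

const dispatcher = new Dispatcher<Payload>();

// A minimal store: holds state and notifies the views on change.
const runningApps = new Set<string>();
dispatcher.register((payload) => {
  if (payload.actionType === "APP_STARTED") {
    runningApps.add(payload.appId);
    console.log("views re-render with", [...runningApps]);
  }
});

// An action creator closes the unidirectional cycle: action -> dispatcher -> store -> view.
const startApp = (appId: string) => dispatcher.dispatch({ actionType: "APP_STARTED", appId });
startApp("/example/app");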
Marathon is or was used by, among others: bol.com, Brand24, Deutsche Telekom, DHL Parcel, Disqus, eBay, ING, Opera, Otto, OVH, PayPal, Strava, Yelp and many more.
Mesosphere (now D2IQ) has its main office in San Francisco, USA and a second office in Hamburg, Germany.
Lottoland.com is one of the largest online lottery game platforms in the world.
At dreamIT GmbH I was involved in the refactoring and maintenance of the JavaScript frontend logic of Lottoland.
In the course of the refactoring, automated test coverage of the frontend was greatly expanded with Selenium and Jasmine tests.
In the Java EE GlassFish backend, I advised on the integration of various payment providers.
Website for EuroEyes Deutschland GmbH that can be maintained using the "Drupal 7" CMS.
The layout is completely responsive and adapts to the screen width of the displaying device. This means that the site is easy to view on a mobile phone as well as on a desktop computer. The content creator does not need to worry about the correct presentation of the page.
The page structure, content, categories and special pages, such as the FAQ, can be maintained via the "Drupal 7" backend using a WYSIWYG editor. Drupal runs via PHP-FPM and, thanks to caching, can deliver pages with high performance and in large volumes.
In this Facebook JavaScript application, the player can win a tea test pack by answering questions relating to the country tea collection.
Data is exchanged between the frontend and backend via a REST JSON API implemented on the server side with Express.js.
Only pure user data is exchanged; the HTML templating is carried out in the Backbone.js frontend application itself, which is delivered statically when the application starts.
A stateless, scalable Node.js cluster with persistent connections to a MongoDB handles the dynamic data processing.
By separating static and dynamic requests to the server, limiting the runtime exchange to pure dynamic user data and using modern technologies such as Node.js, MongoDB, Redis for session handling and Nginx, high performance and scalability were achieved with few resources.
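A stateless Node.js cluster of this kind is commonly built with Node's cluster module: one worker per CPU core, each holding its own persistent MongoDB connection. The following modern TypeScript sketch illustrates the pattern; the route, database name and driver usage are my own assumptions, not the original campaign code.

import cluster from "node:cluster";
import { cpus } from "node:os";
import express from "express";
import { MongoClient } from "mongodb";

if (cluster.isPrimary) {
  // Fork one stateless worker per core; any worker can answer any request.
  for (let i = 0; i < cpus().length; i++) cluster.fork();
} else {
  const mongo = new MongoClient("mongodb://localhost:27017"); // assumed local database
  const app = express();

  // REST JSON API: only pure user data crosses the wire; the HTML
  // templating happens client-side in the Backbone.js application.
  app.get("/api/players/:id", async (req, res) => {
    const player = await mongo.db("campaign").collection("players")
      .findOne({ playerId: req.params.id });
    res.json(player);
  });

  app.listen(3000);
}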
An opt-in procedure was implemented with the Postfix mail server for legally correct confirmation of the newsletter subscription.
In this Facebook JavaScript app, players can invite friends via a personalised link to take a seat at the V.I.P. table they have created themselves.
If you manage to fill the table completely, you enter the prize draw pool for bottles of vodka. The five fastest tables each week receive a special prize.
The status of the playing field and the winners are regularly updated via polling requests from the frontend, so the application always stays up to date without the user having to do anything.
The technical architecture matches that of the tea quiz above: a REST JSON API implemented server-side with Express.js, HTML templating in the statically delivered Backbone.js frontend, and a stateless, scalable Node.js cluster with persistent MongoDB connections, achieving high performance and scalability with few resources.
This Facebook app was realised as a JavaScript frontend application in order to cover all modern target platforms, such as browsers and tablets. Backbone.js was used as a structuring framework, in conjunction with Require.js for modularisation.
This combination makes it possible to maintain an overview and the ability to collaborate even in larger JavaScript projects.
Users can select a photo from the galleries of their Facebook friends, position it and merge Deichkind's tetrahedron with the photo. The result can then be distributed further on Facebook or downloaded.
In order to be able to select the photo, a separate gallery view was programmed via the "Open Graph" API.
On the server side, the individual graphics are merged using libGD.
In this Facebook JavaScript application, the player can win one of 2000 tea packets with an initial chance of 36%.
The playing field consists of 40,000 fruits, and you have 4 attempts to turn over a fruit.
If you win, your own profile picture appears.
Further attempts can be generated by inviting friends via a personalised link.
At the peak of the campaign, the server answered more than 1,000 requests per second without any visible signs of heavy load.
In just a few hours, more than 6,000,000 requests were made to the server and the playing field was completely cleared.
The fruit field can be moved using drag & drop, with new field segments being loaded on-the-fly by the server.
The status of the playing field and the winnings are regularly updated via polling requests from the frontend, so the application always stays up to date without the user having to do anything.
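Both mechanisms, on-the-fly segment loading and regular status polling, reduce to plain HTTP calls from the frontend; in this TypeScript sketch the endpoint paths, segment shape and polling interval are invented for illustration.

// Invented endpoint: load one segment of the 40,000-fruit field by grid position.
async function loadSegment(col: number, row: number): Promise<string[][]> {
  const res = await fetch(`/api/field/segment?col=${col}&row=${row}`);
  return res.json();
}

// Regular polling keeps the field status and winners current without user action.
function startPolling(onUpdate: (status: unknown) => void): () => void {
  const timer = setInterval(async () => {
    const res = await fetch("/api/field/status");
    onUpdate(await res.json());
  }, 5000); // every five seconds; the interval is an assumption
  return () => clearInterval(timer);
}

startPolling((status) => console.log("field status", status));
loadSegment(0, 0).then((fruits) => console.log("first segment", fruits));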
The underlying architecture again follows the tea quiz pattern: a REST JSON API built with Express.js, client-side Backbone.js templating delivered statically, and a stateless Node.js cluster with persistent MongoDB connections, plus Redis for session handling and Nginx, achieving high performance and scalability with few resources.
In the European Parliament's visitor centre in Brussels, the Parlamentarium, stands this approximately four-metre-long table, which represents the 52 weeks of the Parliament.
By sliding the mounted monitor, which can be moved along the x-axis, across the calendar, visitors can read the description of the working week shown at the monitor's current position.
The content data is loaded from the web service of the specially developed CMS server via a SOAP interface.
The application is connected to the position measuring device from WOT via a TCP socket. A high-level protocol was designed and defined for the slider's position data so that it can be easily processed in the AIR application.
To switch automatically to the visitor's language, there is a persistent connection to an XML socket: each visitor wears an RFID chip that can trigger an RFID reader. This socket is provided by an RFID server from NOUS, and the high-level protocol was specified by me.
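The original application was written for Adobe AIR, but the idea of a small high-level protocol over a TCP socket translates directly; here is a modern TypeScript analogue using node:net, in which the newline-delimited "POS <x>" message format and the device address are my own invented stand-ins for the actual WOT protocol.

import net from "node:net";

// Invented message format: one "POS <x>" line per position sample.
function parsePosition(line: string): number | null {
  const match = /^POS (\d+)$/.exec(line.trim());
  return match ? Number(match[1]) : null;
}

const socket = net.connect({ host: "192.168.0.10", port: 9100 }); // assumed device address
let buffer = "";

socket.on("data", (chunk) => {
  buffer += chunk.toString("utf8");
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? ""; // keep any incomplete trailing line
  for (const line of lines) {
    const x = parsePosition(line);
    if (x !== null) console.log("monitor is over calendar position x =", x);
  }
});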
Visitors to the Parlamentarium, the European Parliament's visitor centre, can enter their wish for the future at three touchscreen terminals.
This wish is projected onto the first of the three walls in front of it and displayed together with other wishes.
Older wishes are delegated to the next wall further back.
All 6 AIR applications in this terminal/display network exchange their data via an XML pass-through socket and can therefore communicate easily.
Visually impaired people can also operate this station in blind mode.
For this purpose, the Future Wish Terminal has been integrated with the JAWS screen reader software so that the text is read aloud and displayed on a Braille display. In addition, the visual display in this mode is very high-contrast and has larger text.
At the time of development, the AIR platform could not yet play surround sound.
To remedy this, mplayer was launched via a NativeProcess call to play the surround sound.
You could say it's an audiovisual experience when a wish flies onto the Future Wish Wall.
The content data is loaded from the web service of the specially developed CMS server via a SOAP interface.
The visitor wears an RFID chip and can trigger an RFID reader to automatically display the content in their own language.
For this purpose, there is a persistent connection to an XML push socket.
The high-level protocol for data exchange between the RFID server and the AIR application was specified by me. The RFID server is provided by the company NOUS.
This touchscreen application was specially developed for the Intersolar 2011 trade fair in Munich.
The content is easy to maintain via an external folder structure.
The application ran on 3 interactive stations and was set up and customised by me on site.
Provision of an AMF data service and installation of a "Drupal 6" CMS backend, which can be used to maintain the entire content of the Flash frontend.
The installation was carried out on a server with high throughput, as the website generated 10-40MB of data traffic per visit. I provided the technical advice here.
Winner of the "FWA - Site of the Day" on 15 August 2010.
"Multiplatform" and "Responsive" is the requirement for this website. It can be optimally displayed on a normal desktop computer as well as on a smartphone or tablet computer and has been specially optimised for all major display platforms. The existing multilingual content, including the blog, can be easily managed via the Drupal CMS backend.
Winner of the "CSS Website Award" on 31 July 2010.
The "Single Page Website" display principle allows all content to be displayed on a single page and offers the advantage of smooth navigation between content.
JavaScript improves the feel of the site in a meaningful way, but does not prevent it from being usable if JavaScript is not available.
A "Drupal 6" backend makes it possible to maintain the content.
A screensaver programmed for Adobe AIR that has been integrated into the SiteKiosk kiosk system.
This font orgasm can be viewed in the European Parliament Visitors Centre in Brussels/Belgium, at the Internet Terminal Stations.
You can also have a very cosy coffee there.
The pixel-perfect alignment of the font to a line and other visual treats was painstakingly done by hand by Vincent Stoltzenberg.
In the visitor centre of the European Parliament in Brussels, the Parlamentarium, 5 variants of this passive-interactive application are projected onto entrance areas using a projector. The visitor is equipped with an RFID chip and as soon as the visitor enters the radius of the application's active RFID reader, their language is highlighted in the animation.
The distribution calculation for the individual text blocks was realised in the cross-platform language HaXe so that the calculations could be covered more easily with automated tests. The AS3 code was then generated from the platform-independent HaXe code and integrated into the application.
The connection to the RFID server provided by NOUS is established via a persistent XML socket, which automatically sends the data to the application when it is active.
The content data is loaded from the web service of the specially developed CMS server via a SOAP interface.
Seven of these "Full HD 32-inch" touchscreen tables are located in the European Parliament's visitor centre in Brussels, the Parlamentarium.
From a bird's eye view, visitors can look into virtual file folders and view the contents on moving maps.
This is programmed using the native 3D methods of the Apache Flex framework and runs very efficiently for a "full HD" AIR application.
Flash content that can be loaded at runtime simplifies the workflow when creating the content, as interactive content can be provided by a designer directly from the Flash IDE, without affecting the application.
The content data is loaded from the web service of the specially developed CMS server via a SOAP interface.
The visitor wears an RFID chip and can trigger an RFID reader to automatically display the content in their own language.
For this purpose, there is a persistent connection to an XML push socket.
The high-level protocol for data exchange between the RFID server and the AIR application was specified by me. The RFID server is provided by the company NOUS.
Print out the steering wheel and steer your Skoda Fabia RS in the specially developed game engine using augmented reality and guide it to the finish line.
Webcam capture, recognition of the AR marker with the FLARToolKit, the game engine's physics calculations and level rendering undoubtedly push the Flash Player to the limits of its capabilities, but also show what is possible with an RIA.
A preceding prototype made it possible to test the technologies and ensure feasibility.
The game became "Site of the Day" at FWA in September 2010, which posed a small challenge to the server infrastructure.
Promotional video on YouTube
The skull generator was developed for the 100th anniversary of FC St. Pauli in Hamburg.
Using a webcam or photo upload, you can transform your own head into the "St. Pauli" skull logo and download it as an avatar or wallpaper or distribute it via social networks.
To demonstrate the feasibility of the skull process, an internal prototype was developed as a first step, in which various filter techniques were tested, resulting in the final algorithm.
The entire skull process runs on the user side, so that no server-side resources need to be provided for it.
To ensure fast delivery and scalability, Apache CouchDB was selected as the database management system; all data, including the skull images themselves, is stored in it. The Flash frontend thus receives query responses directly from the database, without an additional, slowing server-side layer in between, as is normally needed to process database responses.
This is where the "Flash to CouchDB" constellation has done some pioneering work.
The high-performance Apache Lucene is used for the full-text search in the skull gallery.
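Because CouchDB speaks plain HTTP and JSON, a frontend can query it directly without an application server; the sketch below shows the idea as a modern TypeScript analogue of the Flash-to-CouchDB setup, where the database name, design document and host are invented examples.

// Query a CouchDB view directly over HTTP; no application server in between.
// Database name, design document and host are invented for illustration.
const COUCH = "http://localhost:5984";

async function latestSkulls(limit: number): Promise<unknown[]> {
  const url = `${COUCH}/skulls/_design/gallery/_view/by_date` +
    `?descending=true&limit=${limit}&include_docs=true`;
  const res = await fetch(url);
  const body = await res.json();
  return body.rows.map((row: { doc: unknown }) => row.doc);
}

latestSkulls(10).then((docs) => console.log(docs));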
The skull generator received an award at the ADC 2011.
When 1&1 Internet AG took over the 700,000 DSL customers of Freenet AG, the task was to develop a personalised online film containing the customer's name. The challenge was to place the customer's name live in the video using technologies widespread on the user side, while achieving scalability and ensuring the protection of customer data.
A 3D framework that also runs on the older Flash Player 9 was used to display the name, although this meant additional work.
Via a UUID in the video link of the newsletter, the customer's name was staged from the internal customer database into a session server, from which the application could retrieve and process it on demand.
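The UUID indirection keeps the customer database itself out of reach of the public application. A hypothetical TypeScript sketch of the session-server lookup, where the route, the in-memory store and the example UUID are all my own placeholders:

import express from "express";

// Hypothetical session store: UUID -> first name, staged from the internal
// customer database when the newsletter links were generated.
const sessions = new Map<string, string>([
  ["0b9af3c2-0000-4000-8000-000000000000", "Felix"], // invented example UUID
]);

const app = express();

// The video player asks only for the name belonging to its link's UUID;
// no other customer data ever reaches the client.
app.get("/session/:uuid/name", (req, res) => {
  const name = sessions.get(req.params.uuid);
  if (!name) return res.status(404).json({ error: "unknown or expired link" });
  res.json({ name });
});

app.listen(3000);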
In order to cope with the immense traffic volume of ~700,000 videos viewed, 8 servers, each with a 1 Gbit connection, were provided for delivery.
During the film shoot, including at the German Climate Computing Centre on an IBM Power 6 p575 "Blizzard" supercomputer, I acted as post supervisor so that the chosen scene settings kept the subsequent 3D mapping feasible.
Specially developed system for data management, data storage, data distribution and monitoring of most of the stations in the Parlamentarium in Brussels, the visitor centre of the European Parliament.
Most of the interactive stations in the Parlamentarium obtain and store their data via the SOAP web service of this system.
Furthermore, the station content, such as texts in 23 languages, videos, subtitles in 23 languages, images, flash content, remote content and station behaviour rules are managed via a CMS web interface.
With a station overview, staff have control over the status of the stations in the visitor centre and can react if necessary in the event of a station failure.
The redundantly mirrored server delivers a database of 40 GB of video streaming data on-demand.
Tests have shown a data throughput of 480 MBit/s at the maximum demand of the stations, which could be handled without any problems.
Remote data from the MEPs' CODICT web service is synchronised automatically at regular intervals.
Uploaded videos can be automatically converted to the target format using ffmpeg.
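Such automatic conversion typically wraps ffmpeg as a supervised child process; here is a small TypeScript sketch of the idea, in which the codec flags and file paths are illustrative assumptions rather than the Parlamentarium configuration.

import { spawn } from "node:child_process";

// Convert an uploaded video into the stations' target format.
// The codec flags ("-c:v libx264 -c:a aac") are an illustrative choice.
function convertVideo(input: string, output: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const ffmpeg = spawn("ffmpeg", ["-y", "-i", input, "-c:v", "libx264", "-c:a", "aac", output]);
    ffmpeg.stderr.on("data", (chunk) => process.stdout.write(chunk)); // ffmpeg logs to stderr
    ffmpeg.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with code ${code}`)));
  });
}

convertVideo("uploads/interview.mov", "converted/interview.mp4");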
A constantly growing system with ever new requirements resulted in a development period of more than 2 years.
For this reason, special attention had to be paid to the architecture of the software and the programme code in order to maintain maintainability and expandability.
This Rich Internet Application was developed for the Vaillant Extranet as an information portal for dealers and employees.
The dynamic content, maintainable via WYSIWYG-CMS, was delivered as XML to the Flash frontend.
Interactive animations, especially of the brand model, made the application tangible.
Flash page that obtains its content structure from server-side XML dynamically generated with PHP and can thus flexibly load and display the content, the menu, galleries and news. Where dynamic layout is required, the content is arranged programmatically, e.g. line breaks are determined automatically.
P2P Next is an open source research project funded by the "Seventh Framework Programme" of the European Union, consisting of a consortium of 21 companies and institutions from 12 European countries, which will be funded for 4 years until 2012.
The focus is the development of a P2P video streaming service called "NextShare", which makes it simple to set up a streaming infrastructure, one that extends as far as TV sets in living rooms and supports Wikimedia in distributing media content. The P2P protocol is based on a further development of BitTorrent, the Tribler protocol.
My task was to co-design and develop a distribution and tracking system for editorial content distributed via this network.
I attended the quarterly consortium meetings at the BBC in London, at Eurovision in Geneva, at TU Delft in Delft and at VTT in Inari (Lapland) to present and discuss the progress of the project.
At the first EU review in Brussels, one year into the project, I was able to convince the three reviewers with my presentation of the "AdMediaCenters" programme. With this programme, video content could be provided with meta information for additional content such as advertising messages and then played and tracked in the network.
Employees can enter video projects and directors via a specially developed WYSIWYG content management system.
The associated newsletter system is linked to the projects and can be operated with just a few clicks.
The in-house development of the components keeps the system lean and performant; it is not weighed down by the size of the project.
A large part of the HTML layout was provided to me as source code and did not have to be created first.
Video distribution system and portal with an in-house-developed WYSIWYG content management system for Mhoch4 GmbH.
The backend is used daily by various employees to manage a large number of videos in different formats, which end users download via the frontend. The system offers user management with different user types, usage statistics, multilingualism with direct translation of all page elements, editability of all elements, content management with categorisation and archiving, and much more.
Thanks to the in-house development of the CMS, good performance can be achieved as the system is directly customised to the needs of the application.
Using mod_perl, the interpreted Perl code is kept in the Apache web server's memory, which also contributes to a significant improvement in performance when pages are delivered.