Disclaimer

The opinions expressed by TU Delft employees and students, and the comments that have been posted, do not necessarily reflect the opinion(s) of TU Delft. TU Delft is therefore not responsible for the content shown on the TU Delft weblogs. TU Delft does, however, consider it important, and of added value, that employees and students can voice their opinions in this environment, which is facilitated by TU Delft.

Weblog of Oscar Castañeda @TU Delft

Presenting your master’s thesis at a conference

Recently I presented my thesis at ApacheCon North America in Atlanta. I wrote about my experience in a blog post for Google’s Open Source Blog: 

http://google-opensource.blogspot.com/2010/11/life-after-google-summer-of-code.html

If you’re interested in presenting your thesis at a conference, I’d highly recommend going for it. It truly is a great experience, and there are various opportunities for students. My advice is to become a member of a professional or academic organization and then look for a conference that is organized every year. You will probably need to apply at least six months to a year in advance, so preparation is key.

In my field, professional and academic organizations include USENIX, ACM, and IEEE, among others, but there are also more specialized organizations. Furthermore, opportunities for students often include funding; in my case, the airfare and hotel were covered. Plus, speaking in public at a conference is great practice for your thesis defense! It has been for me, even though I haven’t defended yet: I already feel more confident knowing I presented in front of 30 people and things went well.

So if you’re thinking of presenting your thesis at a conference, my advice is: go for it! Look online; opportunities are plentiful for those who search for them.

Best of luck! 

Suggestions for attending the EURO conference

This week I attended the 24th European Conference on Operational Research. While still in Lisbon, where the EURO conference was hosted this year, I have some thoughts, mostly advice for students, on attending such a conference.

First of all, the conference is huge (close to 3,000 attendees this year). This year there were about 40 parallel streams, and the sessions varied greatly in how deeply they went into the different subject matters. It is easy to get lost in this conference, so reading the programme beforehand (i.e. as soon as it’s posted online) is highly recommended. Furthermore, it is advisable to have a clear idea of your area(s) of interest; for instance, choosing one or two topics and attending the related talks is good practice.

Second, attending keynotes and plenary sessions is a must. The speakers prepare thoroughly for these sessions and are superstars on the subject matter. This year the plenary sessions featured Professors John Nash Jr. and Harold Kuhn. In fact, on your first visit to the EURO conference I would say try to stick with the keynotes and plenary sessions. Attend streams, but do not get discouraged if you get lost or cannot get to grips with the subject matter. As I said, depth varies significantly at this conference, and you have to be aware of this beforehand.

Another tip is to leave time for meeting conference attendees. You will likely be tired after a couple of sessions, and meeting someone to chat with for a few minutes is always refreshing. Sometimes it takes courage to walk up to someone, for instance a well-known speaker, but keep in mind that people are usually friendly and will be happy to meet an interested student to talk with for a few minutes. And if you have questions about a session, it is advisable to approach the speaker right after it to ask your questions offline; otherwise you might not see him or her again, since the conference only lasts three days.

You might also want to consider attending during the first year of your master’s or final year of your bachelor’s. Perhaps you can gather courage to present your thesis at the conference next time around, for example the year after you first attend. The key is that to present at this conference you do not need to submit a scientific publication, but instead only an abstract of your presentation. This does not mean you will easily get in, but it does mean you have a more accessible hurdle to jump in order to get the chance to present your research at a high caliber conference. In my opinion: definitely worth it! That’s why I will be giving it a shot next year.

And finally, my last piece of advice for attending the EURO, or any other conference for that matter, is to do some well-deserved tourism after each day of the conference, and to make sure to have nice walks and delicious dinners!

As an endnote, the EURO skips a year every three years, so the next conference will not be until 2012, when it will be hosted in Lithuania. However, there will be an Operations Research conference in Zurich next year. So hopefully I’ll see you at OR2011 or at the next EURO!

 

 

Copycat mime or Mousedroid?

Stories of inventors and innovators are usually celebrated and fun to come by. I personally enjoy reading about innovators. I find it thrilling to get a glimpse into their lives and personalities and how that relates to the real story behind their successful innovations. I have noticed, however, that the key word is always successful. Seldom do we read about would-be innovators, whose projects for one reason or another never pulled through, or worse, whose intentions and ideas remained in the world of imagination. Don’t get me wrong, that’s a nice world to live in, especially for a student, but we all know the benefits of bringing great ideas into the real world, whatever that may be. For ease of reference let’s call it "the market."

Well, I have one of those stories for you here. This blog entry is about an idea I came up with about a year ago. I thought it was a pretty unique idea until a friend pointed me to a web page that showed a similar innovation, already implemented and marketed! I felt a sudden excitement seeing that my idea had a real-world relative, but at the same time I felt disappointment and regret for not having spent more time working on the idea. I could give several excuses for why I did not do it, but instead I’ll be honest here: I just didn’t believe strongly enough in my own idea. I’ll get back to this, but first let me tell you about my idea. I suspect it might still have some promise, and I’m currently thinking of how to change it and make it better. For this I need your help. Here’s the plan: I will first go through my idea, then through the strong points of its potential successor, and to close I will ask for your opinion, to see whether or not my idea still has any hope.

So let’s get to it. My idea is about an application for mobile phones. Mobile phones, like some of the ones running Google Android, include capabilities for location-based services. Recent changes in the Android SDK, available since version 0.9, added a new sensor service to control enabling/disabling the compass and accelerometer functions in supported mobile phones. Combining compass functions with the location-based service APIs available in Android makes new and interesting applications possible. What does this mean? It means that mobile phones can now use GPS functions to detect where in the world the phone is located, and compass and acceleration-detection functions to get information about movement and direction. This opens the door for interesting applications.
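As a rough illustration of these capabilities, the sketch below (not code from any real project; the class and field names are invented, and it assumes the usual fine-location permission in the manifest) shows how an Android activity could register for compass and GPS updates so that it always knows where the phone is and which way it is pointing:

```java
// Minimal sketch, not from any real project: an activity that keeps track of
// where the phone is (GPS) and which way it is pointing (compass).
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class PointAndClickActivity extends Activity
        implements SensorEventListener, LocationListener {

    private SensorManager sensorManager;
    private LocationManager locationManager;
    private float headingDegrees; // compass azimuth, degrees clockwise from north
    private Location lastFix;     // most recent GPS position

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        locationManager = (LocationManager) getSystemService(LOCATION_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // The (classic) orientation sensor reports the azimuth in values[0].
        Sensor orientation = sensorManager.getDefaultSensor(Sensor.TYPE_ORIENTATION);
        sensorManager.registerListener(this, orientation, SensorManager.SENSOR_DELAY_UI);
        // Request a GPS fix every 2 seconds or 5 metres, whichever comes first.
        locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 2000, 5, this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        headingDegrees = event.values[0];
    }

    @Override
    public void onLocationChanged(Location location) {
        lastFix = location;
    }

    // The remaining callbacks are not needed for this sketch.
    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onProviderDisabled(String provider) { }
}
```

The headingDegrees and lastFix fields are exactly the two pieces of information the “point and click” idea described next would need.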

Combining location with user interaction enables a mobile phone user to, for instance, “click” on a building to obtain information about it. But how? Well, when a user “clicks” on a building, a “virtual bullet” would be fired and tracked (through software) along its path towards the target. Each step in this path would involve a look-up to a mapping service that answers the question: is this a building? The process of tracking and look-ups continues until the “virtual bullet” hits a building. When it does, the answer would be: this is a building! Then information from the mapping service would be sent back to the mobile phone, for instance in the form of a webpage. New and interesting uses could be given to such an application. For instance, users could click on billboards, buildings, objects or even people (provided all security and privacy concerns have been addressed). This application would be a bridge between the physical and virtual worlds, allowing users to literally surf the world (wide web). The image below shows a sketch I drew when explaining the idea to some friends over dinner at Aula.

 

[Sketch: the Mousedroid concept]
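Sketched in code, the “virtual bullet” loop described above could look roughly like this. It is only an illustration of the idea: MapService and its two methods are hypothetical placeholders for whatever mapping backend would answer the “Is this a building?” question, and the step size and maximum range are arbitrary.

```java
// Illustrative sketch only: MapService is a hypothetical placeholder for whatever
// mapping backend would answer the "Is this a building?" question.
public final class VirtualBullet {

    /** Hypothetical mapping backend; not a real API. */
    public interface MapService {
        boolean isBuilding(double latitude, double longitude);
        String describe(double latitude, double longitude); // e.g. a URL with information
    }

    private static final double EARTH_RADIUS_M = 6_371_000.0;

    /**
     * Steps from the phone's position along the compass bearing until the mapping
     * service reports a building, or until the maximum range is reached.
     */
    public static String fire(MapService map, double lat, double lon,
                              double bearingDegrees, double stepMeters, double maxRangeMeters) {
        for (double travelled = stepMeters; travelled <= maxRangeMeters; travelled += stepMeters) {
            double[] p = destination(lat, lon, bearingDegrees, travelled);
            if (map.isBuilding(p[0], p[1])) {
                return map.describe(p[0], p[1]); // "This is a building!"
            }
        }
        return null; // nothing hit within range
    }

    /** Standard great-circle destination point: start, bearing and distance to a new lat/lon. */
    private static double[] destination(double lat, double lon, double bearingDeg, double distanceM) {
        double d = distanceM / EARTH_RADIUS_M;
        double brng = Math.toRadians(bearingDeg);
        double lat1 = Math.toRadians(lat);
        double lon1 = Math.toRadians(lon);
        double lat2 = Math.asin(Math.sin(lat1) * Math.cos(d)
                + Math.cos(lat1) * Math.sin(d) * Math.cos(brng));
        double lon2 = lon1 + Math.atan2(Math.sin(brng) * Math.sin(d) * Math.cos(lat1),
                Math.cos(d) - Math.sin(lat1) * Math.sin(lat2));
        return new double[] { Math.toDegrees(lat2), Math.toDegrees(lon2) };
    }
}
```

A caller would feed in the latitude, longitude and compass heading collected on the phone, for example fire(map, lastFix.getLatitude(), lastFix.getLongitude(), headingDegrees, 10, 2000), and display whatever description comes back.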

The application I mentioned previously is already on the market and was a finalist in the Android Developer Challenge. It is called Wikitude (see below for a one-minute demo). It is an application for Android that uses location-based information from Wikipedia content to display real-time information about the user’s surroundings. It does this by displaying information on the mobile phone’s screen about the surroundings of a user who points the phone at a specific location, as if taking a photo or video. One thing I find impressive is that distance doesn’t seem to matter for this application: as the demo below shows, points of interest are annotated regardless of how far they are from the user. Another feature I like is how the application presents information about several sites simultaneously and in real time, without the need for user interaction. At the same time, however, this is a limitation of Wikitude. Despite the ease of use that comes from limited interaction, there are many disadvantages to leaving the user out completely. In my opinion, more relevant information, beyond an informative banner, could be presented to the user and, more importantly, could be the trigger for user interaction.

Much like we interact with the world around us, as opposed to only looking at it, an application for reality augmentation, such as the ones discussed before, should enable users to interact with both the virtual and physical worlds. For instance, and this is something I just thought of, Second Life could be integrated into our real life. Meshing information available on the Internet with things that are out there, such as objects, people, buildings, cars, and so on, would enable a substantial augmentation of reality. Moreover, such integration could be done at different levels of reality and detail, enabling people to create their own version of reality through interaction with their personally created world(s). Comparing Wikitude to Mousedroid (the name I gave to my idea), I see differences that can guide further development and use. As I mentioned previously, I think a powerful feature would be the ability to interact with the user’s surrounding environment. Wikitude does not, as far as I know, offer this feature. Because the information is only presented to the user, without the need for interaction, the potential for manipulating the environment is limited. This is an important difference that could mean success for Mousedroid.

So, what do you think? Does Mousedroid stand a chance? Perhaps some of this sounds like day dreaming, and partly it is, as it’s mostly brainstorming at this point. But I enjoy this part of coming up with something new, be it what it may, anything from an essay to an invention. Most importantly, the lesson I learned from this experience is to have more confidence in my own ideas, keeping in mind that successful innovations all started out once as "just an idea."

[Embedded YouTube video: Wikitude demo]

A code summer of cold

GSoC 

This summer I participated in Google Summer of Code, mostly out of my office for the summer: the cold library of TU Delft. During the weekdays I worked on my project (Incubating an Android in Delft) in the library, and because it was empty, well, it was a bit colder than usual. Obvious, you will say, but I truly didn’t expect to have to wear a heavy sweater during the summer, much less in the library! But it turned out to be an outstandingly interesting experience from which I learned about a new technology (SCA), learned to do things the open-source way, and was able to explore possible topics for my thesis, in addition to studying for a decision-making re-exam!

My GSoC project was about getting Apache Tuscany to run on Android, Google’s new mobile phone platform. Because of the lack of annotation support in the previous SDK (which I was using back then) I couldn’t actually get it to run. However, I was able to produce a reduced set of modules for a lightweight Tuscany runtime, one of the main goals of my project proposal. Recently, Google released an updated version of the SDK, and I’m continuing the work I started in GSoC to finally see Tuscany running on Android!

Something nice about GSoC was the freedom to work when and where I wanted to. Often this meant working at night and enjoying the day, or the other way around, or on weekends, or weekdays, or just a bit here and there, or, well, what have you. In the end, I found that it was actually a lot of work, but also interesting work, of a kind I hadn’t actually done before. Initially it was a big challenge to even manage to get out of bed and start working. Then I found out that setting small goals helped me build some momentum, and eventually helped me get into a flexible regime in which I was, in the end, working several hours per day without actually feeling that I was working. At times, the project became more like a hobby and less like a job.

I started off by proposing a project and setting deadlines, milestones and deliverables; what you would expect if you’ve ever taken a project management course or, well, managed a project. Well into the project I found that I wasn’t going to be able to meet some of the deadlines I had proposed, something that is perhaps usual in software development projects. But what is also important in open-source collaboration, I found, is the process. Many times it is actually more about ‘process’ and less about ‘project’. More specifically, because open-source projects involve mostly volunteers, deadlines are not that important. There is no hierarchy in open-source projects, in the sense that there is no command-and-control: no single individual or group has the power to make people do things in a certain order or in a specific way, following, for instance, a rigid set of rules. Instead, doing the right things is what matters. It was actually kind of cool to be reading about this stuff while studying for a decision-making re-exam and to be able to draw a parallel to the actual work I was doing in GSoC. So, to make it short, I followed a process orientation (like that of decision-making in networks) and presented my findings and progress using a project management approach, while the way I actually worked behind the scenes more closely resembled process management. In short, I found management in open-source projects to be a fascinating topic, which will likely become the research topic of my thesis.

Another nice thing about GSoC is that it allowed me to do work related to my academic pursuits, exactly as defined in the goals of the program. Throughout the project I worked on a research paper for the course IN4071: Internet Technology. Together with a colleague from the Norwegian University of Science and Technology (NTNU), who was in Delft as an exchange student, I explored the Service Component Architecture and Apache Tuscany. Recently we learned that we received a 9 for the course! Those interested in taking a look can find our research paper here: Exploring SCA and Apache Tuscany.

Back to the project, err, process. The GSoC program has two go/no-go moments in which students and mentors fill in a mid-term and a final evaluation. These are surveys with detailed questions about the progress that has been made in the project, and also provide an opportunity to make recommendations and express specific views with regards to the program. Among the questions there was one that got me thinking about the benefits of combining academic pursuits with practical work experience, specifically when collaborating on an open-source project.

Below is an excerpt of the answer I gave to the question:

  •  What advice would you give to future would-be Summer of Code students who would like to work with your organization?

I would advise them to consider the program as a starting point for their Master’s (or Bachelor’s) thesis. Combining a graduate (or undergraduate) education with the practical experience acquired in an open-source project is a great mix. For example, MIT offered an experimental course [1] this spring on ‘Building mobile applications with Android.’ Students worked on a project idea and focused on bringing it to the prototype phase with the help of mentors, professional software developers from the Boston area. This closely resembles the GSoC setup for Apache Tuscany, and for many other GSoC projects, in which projects involve building software applications and students have two mentors with whom they can consult throughout their project. I would recommend that students get involved with Apache Tuscany, or any other mentoring organization for that matter, well before their project starts. Students with less experience in open source could consider assembling a project team. At TU Delft, for example, there is a Design Challenge [2] inspired by Stanford’s "d.school" in which students work together on projects from companies over a period of four months. The same setup could be used for groups of students starting early on their GSoC project with Apache Tuscany.

Earlier today I sent my final email for GSoC’08. In all, it was an incredible experience, well worth repeating, and one I would recommend to any student looking for a cool (even cold) summer internship. I liked the program so much that I’m considering being a mentor next year. So if you’re interested in applying to the next GSoC and have any questions, please don’t hesitate to contact me: o.v.castanedavillagran@student.tudelft.nl

 

References 

[1] http://people.csail.mit.edu/hal/mobile-apps-spring-08/

[2] http://designchallenge.tudelft.nl

Breakthrough technologies: Apache and my hard drive

One of the cool things about studying in the Netherlands is the opportunity to attend all sorts of conferences and events that are hosted here or close by in Germany, Belgium or the UK. For me it started with an Apple Tech Talk in Amsterdam back in November, in which Apple engineers detailed the latest developer information for Leopard, Apple’s most recent operating system. Soon after, I found out about ApacheCon Europe, a conference that has been hosted in Amsterdam for several years.

I signed up for the conference and was fortunate to be accepted as a staff volunteer, which included being able to attend the conference sessions. To put it briefly, I had a blast! In the mornings I was handing out t-shirts and conference programs, then I’d be doing session monitoring and introducing speakers; in short, I was helping with the behind-the-scenes stuff that makes it all happen. This way of attending a conference definitely beats the hell out of going as a normal participant. Because you have a badge that says you’re staff, people walk up to you to ask questions, and later on it’s very natural to start conversations with those people. For instance, I met a developer who is actively working on my current Google Summer of Code project. By the second day of the conference, some attendees were already greeting me on a first-name basis.

The conference opened with a keynote from Cliff Schmidt: "Using Audio Technology and Open Content to Reduce Global Illiteracy, Poverty and Disease." As part of Literacy Bridge, a team of volunteers including Cliff is developing a talking book device for knowledge sharing and literacy learning to be used in the developing world. I’ve always wanted to help out in a project like this, even more so now that I’m here thanks to a scholarship, so I decided to volunteer directly with Cliff. In doing so, I suggested to Cliff that he consider my home country, Guatemala, as a next stop for implementing the talking book device. The project is really doing great things; you can read more about it on its website at http://literacybridge.org/.

For the first day of the conference I was assigned to the ‘Community and Business Room’ to do session monitoring. There I met Karl Fogel and was amazed by his talk on the myths behind copyright. Karl is a famous open-source developer turned copyright activist; he now runs a foundation called QuestionCopyright, which promotes the understanding and betterment of copyright. Karl’s ease in handling the business aspects behind open-source software gave me the idea to ask him for help with my upcoming MoT thesis. Karl was more than happy to help and asked me to contact him by email with the details. First off, he advised me to read his book "Producing Open Source Software," which I’m now actively studying for my thesis. Already on this first day the conference had paid off hugely!

The second day was just as fun: I attended some talks on the inner workings of the Apache Software Foundation and an interesting keynote on research performed here in the Netherlands about collaborative innovation. At the end of the day I was very excited to attend Roy Fielding’s REST talk. I had studied Roy’s PhD thesis for an assignment on the Influence of Architecture on Design, so as you can imagine I was pretty happy to get the concepts and applications of REST directly from the man who came up with it! Towards the end of the conference I walked up to Roy to thank him for a great talk and ask him some questions about REST.

The last day of the conference featured the closing keynote on the history and future of the Apache project, given also by Roy Fielding, one of the founders of the Apache HTTP server project and of the Apache Software Foundation. The keynote was amazing. Roy detailed the project from its beginnings and pointed out that a major change was needed, as interest in the Apache project had been steadily declining. During the keynote, Roy also presented general trends in the collaborative development of the Apache HTTP server. Several insights into the nature of open-source software development were given, including, for example, how project goals are determined and how decision-making occurs. Additionally, Roy was very keen on pushing his own protocol, the waka protocol, for the new version of Apache. I was also impressed to see how the keynote was serving as a staging ground to propose a new version of Apache: version 3.0. Roy was proposing major changes in focus, like only supporting a limited number of platforms and features and having the Apache HTTP server use configuration defaults, as, Roy said, Ruby on Rails does. In essence, decisions were being made during the conference, and new courses of action regarding the future of the Apache project were proposed during the keynote.

This left me interested in how the project had developed and what might happen in the future. Coincidentally, a week later, the assignment for the course on R&D management was presented in class. We had to investigate the development and diffusion of a breakthrough technology. Special care, our professor advised, should be taken in choosing the breakthrough technology. The Apache HTTP server was the perfect subject for such an assignment: it was a technology that delivered new-to-the-world functionality, causing a major shift in price/performance for the software and Internet industries.

A couple of months passed and the time came to hand in the assignment. I would never have imagined that my hard drive would crash just the day before! Just as I was putting in the finishing touches, my computer crashed and wouldn’t boot up again. I hastily consulted with friends and colleagues, even experts in the field of data recovery. I was desperate and even tried to freeze the hard drive to see if that would somehow revive it. I wasn’t lucky: all my data was lost, and the latest backup I had was a couple of months old. Fortunately, our professor gave a two-week extension to turn in the assignment. While this was good news, it meant redoing almost half of the assignment, so hard work was needed. The phoenix version of my assignment had to be at least better than the previous one. Two weeks later I turned in an improved and mostly redone version of the assignment.

A few days ago, I was very happy to see that our assignment had been given a grade of 9! Attending ApacheCon Europe had paid off, again! It’s amazing to see just how many things came from the simple act of attending a conference: I made contacts for my thesis and started research on it as well, met people that I’m now working with, learned more about interesting technologies, became a volunteer, and performed research on the development and diffusion of a breakthrough technology. I was so pleased with the outcome that I’m now attending RailsConf Europe in Berlin this coming September. It’s going to be hard to match the experience of ApacheCon Europe, but it will undoubtedly be lots of fun.

Those interested in reading more about the development and diffusion of the Apache HTTP server will enjoy reading our final research assignment. The keynotes from ApacheCon Europe are also available for free.

Bees and Hyves

Have you ever realized that a speaker is outstanding after listening to only a few of his or her words? Have you been drawn into a lecture after just a few minutes of sitting in? Sometimes a witty remark right after the first few sentences is enough to attract the attention of a crowd. Great speakers share something that makes their presentations captivating without a need for eye-catching slides, gimmicks or fireworks. They possess a certain air about them, speak clearly and articulately, and make clever use of humor. Several entrepreneurs are bees in this hive (or Hyves in this case). This includes Koen Kam, one of the founders of Hyves, the Dutch social networking site, and an alumnus of TU Delft. Koen was the guest speaker at ABC Delft's latest lecture, held at the Art Centre Delft on June 5th.

Koen went through the phases of a high-tech start-up based on his experience with Hyves. This included booting up the start-up (like you would a computer after pressing the "on" button), dating angels, avoiding red flags, operating in stealth mode, airing a product, and eventually scaling out. The lecture started with a brief introduction about the advantages students have in starting a company. Big words were mentioned: optimism, future, and risk. Koen considers students "empty vessels" that haven't been spoiled by corporate muddling (he started his own company when he was 19). This quickly captured the student crowd's attention. Then Koen took a sharp turn by confidently stating that a Business Plan is not needed, at least not in the very early stages. A few mumbles were heard from an audience that featured students from the course on 'Writing a Business Plan'!

Being smart and agile as an entrepreneur includes knowing when to seek funding. Money from venture capitalists is hard to come by for a young company with no product and no track record. In short: you won't get it, according to Koen. Some entrepreneurs, he said, are lucky enough to date an "Angel", provided they have something to show and can convey huge market potential. This can be done through a demo. At this mention of a demo, a former classmate from Starting New Ventures whispered in my ear, "…this is not what Ken taught us" (in reference to Ken Morse's example of scaring a jury away with a demo). He was missing the point, though: Koen was talking about the very early stages of a start-up, when a company barely has one customer, whereas Ken's lectures focused on selling an existing solution to a jury in the context of a global company. This confusion was similar to the one caused by the controversial 'you don't need a Business Plan' statement. It made me realize that early in the game a demo can say more than a Business Plan, especially in the case of a high-tech start-up. As a cross-check with history, Koen mentioned the cases of Google, Apple, and other legendary high-tech companies.

Then came the red flags, always something to watch out for. Koen mentioned a few – big salaries, marketing budgets, and change of plans. While operating in stealth-mode entrepreneurs should hold back, work hard, stick to their plans, and tell everyone about their ideas. Pitching, Koen said, gives entrepreneurs the chance to spread their vision and get some feedback (which can sometimes include the horrible truth as I would soon find out).

The most interesting part of Koen's lecture was his detailing of the scaling-out phase. At this point Koen mentioned Malcolm Gladwell's "The Tipping Point," a great book about how little changes can make a big difference. Having read the book, I knew what Koen meant when he talked about the importance of leveraging different types of people, hiring A+ players, and how getting things done and a short time to market can be crucial to success. The first is about identifying different types of people; some are better at selling and communicating, others at analyzing and informing. Knowing the right people and leveraging their "type" can do wonders for entrepreneurs, for example by igniting the flame of word-of-mouth marketing.

My favorite part of the lecture came during the Q&A. Koen was asked about his dreams. He answered that his dream is to play at the same level as companies in Silicon Valley. Resisting acquisition shows he truly believes in his company and his answer confirmed it.   

After the session was over I walked up to Koen and thanked him for a great lecture. I even took the chance to pitch my idea to him, following what I'd just heard. Koen gave me the horrible truth! Although my idea was nice, he said, I shouldn't lose sight of the bigger picture and should instead think of all the options and consider market-winning propositions. I really appreciated his comments because I found his straight, direct feedback more useful than a compliment. As for the idea, I won't pitch it here just yet, as I'm now developing it further…

Until next time ;-)  

The Elevator Pitched me a curveball…

Giving an elevator pitch is not as easy as it seems. I confirmed this personally in YES!Delft‘s "Made in 60 seconds" elevator pitch contest. The contest kicked off the activities of the High-tech Entrepreneurship week and, coincidentally, also marked the start of my Google Summer of Code project, Incubating an Android in Delft, which was the subject of my pitch. Anyway, as my name was called (I was the first contestant) I stepped up to the floor and started pitching away… A few seconds into it I realized that some of what I had prepared had vanished from my mind! I managed to improvise, but my pitch did not make as much sense as it should have, which I confirmed from the jury’s questions at the end of the first round.

Then, watching the other contestants’ pitches, I noticed that the big-screen clock seemed to go slower than normal. That made me realize that time was the problem! It’s not that pitching time is some sort of time warp or that talking in public is some obscure new art; it’s a really simple problem: when pitching, or just speaking in public, some people tend to speak much faster (at least I usually do) than they normally would. Speaking clearly and articulately is something to keep in mind, as it is crucial to getting one’s point across. I think this was the key take-away from the contest for me, in addition to giving concrete numbers (like money and so on) or data to create some sense of urgency.

I stuck around and enjoyed the rest of the event. During the third round there were some excellent pitches, even some really funny ones that deserved a standing ovation but didn’t manage to make it through to the final round. Feedback from the jury was really useful throughout the event, and from one of the jury members it even included up-close-and-personal advice on tactics and techniques to use when pitching: cocktail party, elevator or otherwise. After a final "why should I win" pitch, it was one of my previous classmates from Ken Morse’s ‘Starting New Ventures’ course who won the honors. All in all it was a great experience, surely worth repeating, if only to hear about the great ideas floating around campus here in Delft and to take another swing at that convex elevator curve ball!

For those interested, my pitch and more information about my project can be found here: http://androidindelft.googlepages.com 

Incubating an Android in Delft

After some weeks of anxiously waiting, the results from Google's Summer of Code are finally out. And…my proposal was chosen!

I applied for a project from Apache Tuscany, an incubator project of the Apache Software Foundation. Apache Tuscany is developing an open-source implementation of the Service Component Architecture (SCA) specification. One of the aims is to create a 'simple' service-based model for the construction, assembly and deployment of services. This enables developers to create service components and to assemble components into applications called composites.

More specifically, my project is about allowing Google Android applications to easily consume business services. In creating Android, Google and the Open Handset Alliance were looking to provide developers with a platform and the set of tools needed to develop new types of experiences. The Android software stack includes the building blocks needed to achieve this and is sufficiently open and extensible to allow those pieces to be combined in new and innovative ways. In my opinion, the Apache Tuscany project can empower users, another all-important source of innovation, by providing them with a "Service Development Kit" that allows them to easily and intuitively combine services in such a way as to create new types of experiences.

The Apache Tuscany incubator project implements the SCA specification and enables users and developers to create service components and to assemble components into applications called composites. Such applications on Android can be assembled out of Google services available as SCA components, provided that Android has a thin SCA core/runtime to perform such assemblies, allowing applications to easily consume business services. Developing this SCA core/runtime for Android is the focus of my project for GSoC.
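To make the component and composite terminology a bit more concrete, here is a minimal sketch of an SCA service component in Java, assuming the SCA 1.0 annotations (org.osoa.sca.annotations) that Tuscany 1.x implements. The component, interfaces and values are invented for illustration and are not taken from my GSoC project.

```java
// Minimal sketch of an SCA service component, assuming the SCA 1.0 Java annotations
// (org.osoa.sca.annotations) implemented by Tuscany 1.x. All names are invented.
import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

// The business interface a client application would consume.
interface StockQuoteService {
    double getQuote(String symbol);
}

// A dependency that the runtime wires in according to the composite.
interface CurrencyConverter {
    double toEuro(double usDollars);
}

@Service(StockQuoteService.class)
public class StockQuoteComponent implements StockQuoteService {

    private CurrencyConverter converter;

    // Injected by the SCA runtime according to the wiring in the composite file.
    @Reference
    public void setConverter(CurrencyConverter converter) {
        this.converter = converter;
    }

    public double getQuote(String symbol) {
        double usDollars = lookupUsDollarPrice(symbol);
        return converter.toEuro(usDollars);
    }

    private double lookupUsDollarPrice(String symbol) {
        return 42.0; // placeholder for a call to a real back-end service
    }
}
```

A separate .composite file would then wire a concrete CurrencyConverter component to the converter reference; on Android, the thin SCA core/runtime described above would be the piece that reads such a composite and performs the wiring.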

I envision a future of open-source services created by groups of individuals pursuing a common set of goals and possibly sharing the same set of beliefs. Such services would be created in an environment of open communication and collaboration, much like the way software has been created for years in open-source projects and, to a similar extent, the way services have been created by Google. Furthermore, the principles and philosophy of open source would drive efforts to produce these so-called open services, with the cooperation of individuals, corporations, universities and governments. Countless applications can arise out of such efforts, from which society at large would benefit. This is a very exciting time to contribute to projects like Apache Tuscany! I will proudly wear the GSoC t-shirt after this summer!

I would like to thank the Apache Tuscany community, especially the project mentors for GSoC. Their help was vital in creating my proposal. I'm looking forward to this great opportunity!

My accepted application proposal can be found here:

Allow Google Android applications to easily consume business services

Analysis of Operational Value Creation at Yahoo!

 

A resource-based analysis of a company’s strategy is congruent with an analysis of value creation at the company’s core: its operations. In other words, operational value creation lies at the heart of the ends and means of the enterprise: creating value for its shareholders through its competencies. Again, this is consistent with the resource-based view of strategic analysis. Furthermore, it stresses the importance of competencies in creating value through a combination of efficiency and differentiation. Analyzing the value-creating performance of such processes yields important insights into the company’s strategy.

 

Operational Success Criteria

Yahoo!’s operational success criteria consist of strategic objectives and goals that contribute towards meeting those objectives. The most recent strategic objectives were established a few months after co-founder Jerry Yang stepped up as CEO[1] and were announced during Yahoo!’s Q3 2007 earnings conference call. The strategic objectives are:

 

  • To become the starting point for the most consumers on the Internet;

  • To establish Yahoo! as the “must buy” for the most advertisers;

  • To deliver industry-leading platforms that attract the most developers.

 

Several goals have been established to achieve these strategic objectives. Some of those goals[2] are:

 

  • Continue to invest, innovate, and create whatever is necessary to gain more consumers.

  • Create a motivated community of developers all building uniquely compelling applications that reach hundreds of millions of Yahoo! users.

  • Accelerate overall advertising revenue growth by the end of 2008.

  • Leverage strengths and anchor properties to create the most compelling and innovative products and services.

  • Grow visits to key Yahoo! starting points and properties by approximately 15% per year over the next several years.

  • Increase the percentage of total online advertising “demand touch” to 20% of the addressable market over the next several years.

  • Change the game in Search and increase overall share of search queries.

  • Grow market share of total online advertising.

  • Generate the maximum long-term value for assets.


Performance on Operational Success Criteria 

Since Q3 of 2007 Yahoo! has been “putting substantial talent and resources behind two of the major strategic objectives.”[3] The progress made up to Q4 of 2007 in these two strategic objectives is discussed below.

 

To become the starting point for most consumers on the Internet, Yahoo! is focusing on five properties: Home Page, Search, Mail, My Yahoo!, and Mobile. Related properties, also referred to as anchor property verticals, including Sports, Finance and News, are also being leveraged as key starting points.

 

An example of value creation that relies on this first strategic objective is Yahoo!’s Home Page. It remains the most visited web page on the Internet, in part as a result of “deliberate efforts to program relevant information from across the web, regardless of whether the landing page is a Yahoo!-owned site or a third party site.”[4]

 

Value creation is also present in some of the company’s key initiatives, among them Search, Mail and Mobile. In Search, the company is encouraging innovation, most recently by investing in open-source development of grid computing that improves the throughput and scalability of Yahoo!’s services. In Mail, Yahoo!’s recent acquisition of Zimbra is expected to drive innovative developments. And finally, in Mobile, Yahoo! introduced Yahoo! Go, a platform for “personalized communications, entertainment and information services to mobile devices, televisions and desktops.”[5]

 

To establish Yahoo! as the “must buy” for the most advertisers, the company’s second strategic objective, it has sustained increased financial gains on Panama[6], the company’s new marketing ranking system for online advertisements, representing a 20% improvement over previous quarters. Moreover, Yahoo! has continued to build a partner network for ad display with companies like eBay and AT&T. In addition, the grid computing initiative is also expected to create value in search and marketing.

 

Performance Analysis

The strategic objectives driving Yahoo!’s developments in value creation are part of a coherent strategy that is ultimately aimed at achieving differentiation. By becoming the starting point for users and by satisfying their needs, Yahoo! is attempting to create lock-in through quality that users will start off with and come back to, which would essentially mean conquering one of the greatest challenges in the Internet industry, namely the lack of lock-in effects. Yahoo! realizes that “revenue from your locked-in customers is the return on the investment you have made in them.”[7] To this end, the company is investing in differentiation.

Even though the home page of Yahoo! is the most visited site on the Internet, the company’s search capabilities, which are the defining competency required for online advertising, are still lagging behind Google’s. The Panama project was launched for this reason and will be a defining factor in Yahoo!’s evolution in value creation.

 

Acquisitions and partnerships are beneficial competencies for value creation in the rapidly changing Internet environment. Yahoo! has effectively leveraged and integrated these resources into its operating routines for value creation.

 

Financial Performance Analysis

The table below shows Yahoo!’s key financial figures for the years between 2002 and 2007. The analysis below will be used in comparison to the Operational Cash-flow Development analysis in the next section.

 

Table 1. Yahoo! Inc (NASDAQ: YHOO), key financial figures 2002-2007 (dollar amounts in millions)

| Company | Year | Net Profit | EBIT | EBITD | EBITDA | ROI | ROE | ROA | Gross Margin | Total Operating Expense | Break-even |
|---------|------|-----------|--------|-------|--------|-----|-----|-----|--------------|-------------------------|------------|
| Yahoo!  | 2002 | $107      | $88    | -$21  | -$21   | 0%  | 2%  | 2%  | 83%          | $865                    | $1,042     |
| Yahoo!  | 2003 | $238      | $296   | $136  | $136   | 2%  | 5%  | 4%  | 77%          | $1,329                  | $1,727     |
| Yahoo!  | 2004 | $840      | $689   | $523  | $378   | 5%  | 12% | 9%  | 62%          | $2,886                  | $4,655     |
| Yahoo!  | 2005 | $1,896    | $1,108 | $884  | $711   | 8%  | 22% | 18% | 60%          | $4,150                  | $6,917     |
| Yahoo!  | 2006 | $751      | $941   | $639  | $401   | 5%  | 8%  | 8%  | 58%          | $5,485                  | $9,456     |
| Yahoo!  | 2007 | $660      | $695   | $286  | $36    | 2%  | 7%  | 7%  | 59%          | $6,274                  | $10,634    |

Yahoo!’s net profits rose to a high of $1.9 billion in 2005, after which they declined to a four-year low of $660 million. The company’s 8% ROI, its highest to date, coupled with its all-time-high ROE of 22% in 2005, and the decline in both ratios in subsequent years, bear witness to the fierce competition inherent in this industry sector. Nevertheless, the decline is also related to Yahoo!’s strategic changes during those years. Both competition and strategic changes have been crucial developments in the company’s evolution.

 

The 18% ROA in 2005 shows that Yahoo! consistently increased efficiency after the Dot-com crash in 2000. The years after that show declining efficiency, again related to competition and strategic changes. Furthermore, the gross margin reflects, to some extent, the split between fixed and variable costs and in turn gives us an insight into the company’s break-even point.

 

In relation to the break-even point, the gross margin has remained stable at around 60% for the past four years while operating expenses have continued to rise, pushing the break-even point up and clearly indicating a drop in efficiency.
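As a rough cross-check (this is an assumption about how the Break-even column in Table 1 is derived; the formula is not spelled out above), break-even revenue can be approximated as total operating expense divided by gross margin:

```latex
\text{Break-even revenue} \approx \frac{\text{Total operating expense}}{\text{Gross margin}},
\qquad \text{e.g. for 2007: } \frac{\$6{,}274\text{M}}{0.59} \approx \$10{,}634\text{M}.
```

With a roughly constant gross margin, the break-even point therefore rises in step with operating expenses, which is exactly the pattern visible in Table 1.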

 

Operational Cash-flow Development Analysis 

The table below shows the key financial figures for Yahoo! for a more extensive period, covering the years between 1998 and 2007. Based on these numbers Figures 1 and 2 show the Performance-volume and Differentiation-efficiency relations.

 

Table 2. Yahoo! Inc (NASDAQ: YHOO), key financial figures 1998-2007 (dollar amounts in millions)

| Company | Year | Turnover | Employment Costs | Depreciation | Non-operational Results | Tax  | Net Profit |
|---------|------|----------|------------------|--------------|-------------------------|------|------------|
| Yahoo!  | 1998 | $203     | $147             | $0           | $0                      | $18  | $26        |
| Yahoo!  | 1999 | $589     | $271             | $43          | $38                     | $36  | $61        |
| Yahoo!  | 2000 | $1,110   | $489             | $69          | -$34                    | $188 | $71        |
| Yahoo!  | 2001 | $717     | $474             | $131         | $77                     | $11  | -$93       |
| Yahoo!  | 2002 | $953     | $545             | $109         | $88                     | $71  | $107       |
| Yahoo!  | 2003 | $1,625   | $734             | $160         | $46                     | $147 | $238       |
| Yahoo!  | 2004 | $3,575   | $1,153           | $311         | $476                    | $438 | $840       |
| Yahoo!  | 2005 | $5,258   | $1,556           | $397         | $1,092                  | $768 | $1,896     |
| Yahoo!  | 2006 | $6,426   | $2,147           | $540         | $140                    | $458 | $751       |
| Yahoo!  | 2007 | $6,969   | $2,662           | $659         | $131                    | $337 | $660       |

Yahoo!’s performance-volume relation, shown in Figure 1 below, reflects the turmoil that came with the Dot-com crash in 2000. After a period of increasing performance and volume up to 2005, Yahoo!’s situation changed dramatically, with a significant drop in performance despite increasing volume. This coincides with the drop in efficiency identified in the financial break-even analysis and is supported by Figure 2. Furthermore, it explains the change in leadership and strategy in late 2006, the point at which the situation started improving.

 

 

Figure 1. Performance-volume relation

 

 

 

The figure below shows at one extreme a drop in efficiency and increased differentiation that came about with the Dot-com crash, of which Yahoo! was a survivor. At the other end it shows a drop in efficiency, after a period of steadily decreasing differentiation, that is the result of increasing competition and strategic changes within the company. These conditions caused a dramatic turn in efficiency after a high in differentiation-efficiency in 2005. This sudden drop in efficiency, which has continued since 2005, confirms the break-even analysis and the analysis from Figure 1.

 

Figure 2. Differentiation-efficiency relation

 

Strategic Stability Analysis

Yahoo! has changed its strategy several times over the past few years. Most recently, in October 2007, it announced yet another shift in strategy. Although such a change was in order, especially with the new vision of co-founder and new CEO Jerry Yang, such changes have a deep and far-reaching impact on the company’s value-creation capabilities. In this case, the changes have been mostly positive but still require further development.

 

The guidelines provided by the three core strategic objectives and their supporting goals are effective in setting a clear direction for the company. However, these same objectives must be maintained in the coming years in order to be most effective.

 

Conclusions

Yahoo!’s financial performance has significantly decreased since 2005. The company has not been successful according to its own performance criteria, as is evidenced by its increasing break-even point and fairly stable gross margins. Furthermore, the company has not been successful according to standard performance criteria such as ROI, ROE, ROA, EBIT(D)(A), all of which have decreased from highs in 2005.

 

The company’s strategic orientation, after new CEO Jerry Yang was appointed and the company revamped its strategy, is to increase differentiation. This is strongly supported by all three of the company’s strategic objectives and by most of the current company goals, which are geared specifically towards differentiation. It is clear that the company’s generic strategy is predominantly focused on differentiation.

 

The company’s strategic direction was not successful after 2005. This is evidenced by the changes in top management and strategy, and supported by this financial value-creation analysis. Furthermore, the company’s strategic direction has been fairly unstable over the past five years, indicating the importance of adaptability and agility in this highly unstable environment.

 

 


 

References 

 

[1] “Yahoo! Co-Founder Jerry Yang Named Chief Executive Officer,” Yahoo! press release, June 18, 2007, http://yhoo.client.shareholder.com/press/ReleaseDetail.cfm?ReleaseID=249882, accessed March 1, 2008.

[2] Adapted from the Yahoo! Q3 2007 and Q4 2007 earnings calls.

 

Note: “demand touch” includes ad revenues and associated expenditures.

 

[3] “Yahoo! Q4 2007 Earnings Cast,” Yahoo! investor relations, January 29, 2008, http://advision.webevents.yahoo.com/yahoo/earnings/2007/Q4/, accessed February 28, 2008.

[4] “Yahoo! Q4 2007 Earnings Cast,” Yahoo! investor relations, January 29, 2008, http://advision.webevents.yahoo.com/yahoo/earnings/2007/Q4/, accessed February 28, 2008.

[5] “Yahoo! Expands Reach Beyond the Browser with Launch of Yahoo! Go; Yahoo! Go Brings Seamless and Personalized Communications, Entertainment and Information Services to Mobile Devices, Televisions and Desktops,” Yahoo! press release, January 6, 2006, http://yhoo.client.shareholder.com/press/ReleaseDetail.cfm?ReleaseID=183436, accessed March 2, 2008.

[6] “Yahoo! To Launch New Search Marketing Ranking Model in the U.S. On February 5,” Yahoo! press release, January 23, 2007, http://advision.webevents.yahoo.com/yahoo/earnings/2007/Q4/, accessed March 1, 2008.

[7] Shapiro, C., and Varian, H. R. (1999), Information Rules: A Strategic Guide to the Network Economy, Cambridge, MA: Harvard Business School Press.

Architecture and Enterprise Engineering

 


 

Architecture is a verb, not a noun. Though grammatically incorrect, this symbolic statement is true to the extent that architecture prescribes the state space and transition space of a system. Such is the case in Enterprise Engineering, which sees enterprises as systems and defines architecture as the “normative restriction of design freedom.”[1] The fact that architecture can be thought of as a verb in this field implies that it is part of a set of actions that influence the world surrounding its use; in this sense, it applies a prescriptive notion of architecture. This is evident from the intimate relation of architecture to the design and engineering of systems, where normative restrictions are the most actionable and are present at different conceptual and operational levels.

Modeling an enterprise as a system is useful to acquire knowledge of its function and to develop an understanding of its construction. Correspondingly, different conceptual models of systems are employed to better grasp this dichotomy; these are the functional and constructional models of systems. Through these models the enterprise engineer can acquire and develop a conceptualization of the system that is an “understanding of its construction and operation in a fully implementation independent way.”[1] This is done in the context of a ‘generic system development process’ in which design and engineering are central activities.

This implementation-independent conceptualization of a system facilitates a high-level view of its core elements and their inter-relationships. This yields an understanding of the system that is in line with the notion of system ontology as applied in [2]. The underlying (enterprise) ontology will then contain the essence of the enterprise and, together with the applicable (enterprise) architecture[2], can be used for (re)designing and (re)engineering a system such as an enterprise. Enterprise Ontology and Enterprise Architecture have evolved into complementary notions in the field of Enterprise Engineering; together they effectively consider an enterprise as “a designed, engineered and implemented system.”[1]

Such an ontology-based system can provide, among other things, services over the World Wide Web, a context in which the notion of ontology has been relied upon heavily, most notably in the Semantic Web. The recent Pragmatic Web Manifesto[3] signals fertile ground for the application of Enterprise Engineering, both by relying on ontology as a “formal, explicit specification of a shared conceptualization”[1] and on the prescriptive notion of architecture as a source of agreed-upon design principles. One such application could be to provide the principles for the (re)design and (re)engineering of (virtual) enterprises on the Pragmatic Web. It is an exciting time to study the field of Enterprise Engineering.

This report concentrates on exploring the role of architecture in Information Systems Development (ISD) by focusing specifically on the design and engineering of business processes and ICT-applications. I will argue that the notion of architecture and the act of architecturing are essential in the (re)design and (re)engineering of business processes and ICT-applications, and are key enablers, to the same effect, of organizations and enterprises. This report constitutes the first assignment for the course Enterprise Architecture & Web Services taught by Prof. Jan Dietz. The content is largely based on the publications and lectures of Prof. Jan Dietz, Dr. Antonia Albani and Dr. Jan Hoogervorst.

This report is organized as follows. The section on architecture discusses the definition of architecture and other, often conflicting, definitions. Then there is a discussion of architecture frameworks, leading to a more formal definition of architecture and specifically covering the eXtensible Architecture Framework. Following this there is a discussion of the notions of model and system needed for the subsequent discussion of the application of architecture to the business processes and ICT-applications of an enterprise. Finally, some conclusions are drawn in the last section.

2. Architecture

Definition

Conceptually, architecture is the “normative restriction of design freedom”[1] and operationally it is “a consistent and coherent set of design principles.”[1] These theoretical and practical definitions are best understood in a system development process. In such a process the creators of systems rely on abstractions and the knowledge they have about the world to create (intermediate) representations of systems. A useful artifact to this end is the notion of “model.”

There are several models used in Enterprise Engineering. Among the most useful in the development of systems are the black-box and white-box models, regarding the function and construction of systems respectively, and the ontological and implementation models, at the highest and lowest levels of abstraction in the construction of a system respectively. These models are useful for obtaining information about the system to be realized, even if only intermediately, and ultimately for gaining knowledge about the function and construction of the system and, at the highest level of abstraction, about its essence. All these models are used in the development of systems.

In the ‘generic system development process’ two types of systems are involved, namely a using system and an object system; the former makes use of the functionality provided by the latter, which is the object of attention. There are well-delineated activities in this development process that are concerned with the design, engineering, and implementation of an object system. The first of these activities, designing, consists of two phases: determining requirements and devising specifications. Both phases are needed to bring about an object system, with regard to its inputs and outputs as well as to its construction. These activities involve possessing knowledge about the using system and the object system, insofar as the function and construction of these systems are concerned. Hence, the black-box and white-box models are useful in the process of designing systems. These models are shown in the figure below in relation to the design of an Object System (OS) on the basis of a Using System (US).

Figure 1 [8]

Models at differing levels of abstraction are useful in the engineering and implementation of systems, the other two activities involved in the ‘generic system development process.’ At the highest level of abstraction, and therefore in a “fully implementation independent way”, an ontological model shows the essence of a system. The word ontology, from which the idea for this model comes, refers to the nature of being and “requires us to make a strict distinction between the observing subject and the observed object.”[2] The importance of this distinction will be discussed in later sections.

There is an ontological model (the highest-level constructional model) for each type of system, as well as an implementation model; several layers of abstraction separate the two. In the lowest constructional model, the implementation model, the assignment of technological means to the elements of the model is referred to as implementation. The construction of the implementation model from the ontological model, an activity that usually iterates through several intermediate models, is referred to as engineering. These activities are performed completely within the object system in the ‘generic system development process’ shown in Figure 2 below.

 Figure 2[8]

With this conceptual framework in place, it is now possible to define, and better understand, architecture as the “normative restriction of design freedom.”[1] The operational notion of architecture as “a consistent and coherent set of design principles that embody general requirements”[1] also becomes clearer. It is now possible to give a more formal definition of architecture, in relation to which a fourth activity in the system development process can be described; these topics will be covered in the next section. More will be said about requirements and principles in the section on ‘Design and Engineering.’

Figure 3 [8]

Other Definitions

There are other definitions of architecture in circulation; they are often conflicting or ill-defined. Some examples include those of Zachman, IEEE P1471, and TOGAF; we will examine each of these in turn. The first states that:

  • “Architecture is that set of design artifacts, or descriptive representations, that are relevant for describing an object, such that it can be produced to requirements as well as maintained over the period of its useful life.”[4]

This definition is descriptive; as such, it says nothing about the construction of systems. Notions about the construction of a system are, however, necessary for constructional system design.

Another definition is that of IEEE P1471, which is related to the one given in TOGAF. It states:

  • “Architecture is the fundamental organization of a system embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution.”[5]

This definition is also descriptive and has the added problem that it contains two definitions within it. The same issues are found in the definition of architecture given in TOGAF, which states that architecture is:

  • “A formal description of a system, or a detailed plan of the system at component level to guide its implementation. The structure of components, their interrelationships, and the principles and guidelines governing their design and evolution over time.”[6]

These definitions have in common that they describe architecture; in doing so, they comprise what is known as the descriptive notion of architecture. For the design and engineering of systems, the prescriptive notion is also needed. Such a notion is given by the conceptual and operational definitions of architecture that are part of the field of Enterprise Engineering. These definitions provide a set of design principles that are useful within a particular architecture framework; this is the subject of the next section.

3. Architecture Framework


Definition

An architecture framework is “a conceptual structure pertinent to a certain system type, consisting of areas of concern and a necessary and sufficient set of design domains pertinent to a chosen perception.”[7] Informally, it is “a structured checklist of issues that must be paid attention to or that must be taken into account.”[8] It gives rise to the activity of architecturing, or architecting, which is the “heuristic, participative process that defines the principles of architecture.”[7] 

Such a conceptual structure is formally defined[9] as a tuple <S,D,A> where:

  • S is a set of system types.
  • D is a set of design domains.
  • A is a set of areas of concern.

These dimensions are assumed to be orthogonal.

This purposefully broad and generic definition allows an Architecture Framework to cover systems, the design of those systems, and areas of concern for the design of those systems, and to do so in relation to their function and construction. Furthermore, it allows for the definition of an “AF as an extension of one or more existing AF’s, while also being extensible itself.”[8] 
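
As a rough illustration, the tuple <S,D,A> can be captured directly as a small data structure. The Python class below, and the example values for a root framework, are assumptions made for this sketch and not part of the formal xAF definition.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ArchitectureFramework:
    """An architecture framework as the tuple <S, D, A>."""
    system_types: frozenset      # S: system types
    design_domains: frozenset    # D: design domains
    areas_of_concern: frozenset  # A: areas of concern


# Hypothetical root framework (xAF0): one generic system type, the generic
# functional and constructional design domains, and an illustrative area of concern.
xaf0 = ArchitectureFramework(
    system_types=frozenset({"system"}),
    design_domains=frozenset({"function", "construction"}),
    areas_of_concern=frozenset({"security"}),
)
```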

The eXtensible Architecture Framework (xAF) is based on the definition of Architecture Framework given above. It consists of a Generic Architecture Framework (GAF) and rules for extending it. The generic architecture framework serves as a universal root xAF, referred to as xAF0, for other xAF nodes. It can be extended through rules for specialization and integration, respectively used to specify an Architecture Framework in more detail or to unite several Architecture Frameworks into a more elaborate structure.

The specialization rule is defined in terms of two Architecture Frameworks, xAFi and xAFj; a small code sketch of this rule follows the list below. An xAFj, defined as <Sj,Dj,Aj>, is a specialization of an xAFi, defined as <Si,Di,Ai>, if and only if:

  • Every system type s ∈ Sj is an exclusive subtype of some system type s′ ∈ Si.
  • Dj ⊆ Di.
  • Aj ⊆ Ai.
  • xAFi is a valid xAF.
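
A minimal sketch of this rule as a predicate is given below. It assumes the subset reading of the second and third conditions, represents each framework as a plain (S, D, A) triple of sets, and records the subtype relation in an explicit mapping; the validity of xAFi itself is taken for granted.

```python
def is_specialization(xaf_j, xaf_i, subtype_of):
    """Check whether xAF_j = <Sj, Dj, Aj> is a specialization of xAF_i = <Si, Di, Ai>.

    xaf_j and xaf_i are (S, D, A) triples of sets; subtype_of maps a system
    type to the system type of which it is an exclusive subtype.
    """
    s_j, d_j, a_j = xaf_j
    s_i, d_i, a_i = xaf_i
    every_type_specialized = all(subtype_of.get(s) in s_i for s in s_j)
    return every_type_specialized and d_j <= d_i and a_j <= a_i


# Hypothetical example: an organization framework specializing the root xAF0.
xaf0 = ({"system"}, {"function", "construction"}, {"security"})
org_xaf = ({"organization"}, {"function", "construction"}, {"security"})
print(is_specialization(org_xaf, xaf0, {"organization": "system"}))  # True
```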

The integration rule is defined in terms of several xAF’s as follows:

An xAFn, defined[9] as <S,D,A>, is an integration of a set of xAF's, defined[9] as {<S1,D1,A1>, <S2,D2,A2>, … , <Sk,Dk,Ak>}, if and only if:

  • S is the integral union of S1,S2, … , Sk.
  • D is the integral union of D1,D2, … , Dk.
  • A is the integral union of A1,A2, … , Ak.
  • xAFn is a valid xAF. 

An example specialization of the generic xAF for systems is an xAF node for organizations. In related fashion, an example of integration is the integration of an organization xAF node, an information system xAF node, and an ICT-infrastructure xAF node into a resulting xAF node; a sketch of this example is given below.
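
Continuing in the same illustrative style, the integration rule can be sketched by taking the union of each dimension; reading ‘integral union’ as plain set union is an assumption of this sketch, the check that the result is a valid xAF is omitted, and the example nodes below are hypothetical.

```python
def integrate(xafs):
    """Integrate a collection of (S, D, A) triples into a single <S, D, A> triple."""
    s = set().union(*(x[0] for x in xafs))
    d = set().union(*(x[1] for x in xafs))
    a = set().union(*(x[2] for x in xafs))
    return s, d, a


# Hypothetical nodes for the example above.
org_xaf = ({"organization"}, {"function", "construction"}, {"compliance"})
is_xaf = ({"information system"}, {"function", "construction"}, {"security"})
ict_xaf = ({"ICT infrastructure"}, {"function", "construction"}, {"availability"})

print(integrate([org_xaf, is_xaf, ict_xaf]))
```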

The resulting structure, through use of the specialization and integration rules, is an xAF lattice. As shown in the figure below it is a top-down structure of xAF nodes, where a straight line represents specialization and an xAF node reached through dotted lines represents integration. 

Figure 4 [8]

The specialization and subsequent (possible) integration of xAF nodes become interesting to the extent that they specify heterogeneous systems. This follows from the fact that the xAF0 node represents a homogeneous system, while the xAFi's represent, to differing extents, heterogeneous systems. It is necessary, then, to define a homogeneous system.

The root xAF, referred to as xAF0, pertains to the generic homogeneous system. A homogeneous system is formally defined[9] as a tuple <A,C,E,P,S>, where:

  • A is the class of atomic elements of the system category.
  • C ⊆ A, called the composition.
  • E ⊆ A, called the environment; E and C are disjoint.
  • P is a set of products: things that are brought about by the elements in C for the benefit of the elements in E.
  • S, called the structure, is a set of influencing bonds among the elements in C and E. By virtue of these bonds, the elements are able to act upon each other, and within patterns of such interaction the products in P are brought about.
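
As a rough data-structure sketch of this tuple, using plain sets of labels and representing the structure S as a set of pairs of interacting elements (both of which are assumptions made only for illustration):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class HomogeneousSystem:
    """A homogeneous system as the tuple <A, C, E, P, S>."""
    atoms: frozenset        # A: atomic elements of the system category
    composition: frozenset  # C, a subset of A
    environment: frozenset  # E, a subset of A, disjoint from C
    products: frozenset     # P: things brought about by C for the benefit of E
    structure: frozenset    # S: influencing bonds, here pairs of interacting elements

    def is_well_formed(self):
        """Check the constraints stated in the definition above."""
        return (
            self.composition <= self.atoms
            and self.environment <= self.atoms
            and self.composition.isdisjoint(self.environment)
        )
```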

As heterogeneous systems can be characterized by xAF nodes, it is interesting to gain knowledge about the architecture of those nodes, as it will be determinant for the design and (subsequent) engineering of systems. It is necessary, then, to define architecture more formally.

An architecture within a particular architecture framework can be defined more formally as a set P of design principles, such that every p ∈ P:

  • concerns one system type s ∈ S,
  • is a restriction of design freedom in one domain d ∈ D, and
  • accommodates one or more areas of concern a ∈ A.
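
Tying this back to the earlier sketches, a design principle and the check that a set of principles constitutes an architecture within a framework <S,D,A> could look as follows; the field names are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DesignPrinciple:
    statement: str
    system_type: str             # exactly one s in S
    design_domain: str           # exactly one d in D
    areas_of_concern: frozenset  # one or more a in A


def is_architecture(principles, s, d, a):
    """Check that every principle fits the framework <S, D, A>."""
    return all(
        p.system_type in s
        and p.design_domain in d
        and p.areas_of_concern       # at least one area of concern ...
        and p.areas_of_concern <= a  # ... all of which belong to A
        for p in principles
    )
```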

To evaluate the influence of architecture on the design and engineering of business processes and ICT-applications it is necessary to review the notions of model and system. This is the subject of the next section and will be covered in relation to the DEMO methodology and the concepts of Enterprise Ontology.

4. The Notions of Model and System

The notion of architecture is an essential part of the emerging field of Enterprise Engineering. This field has its roots in the DEMO methodology, which is part of the Language/Action Perspective (LAP). This theory is used as a basis for the design of Information Systems — “the Language/Action Perspective is an approach that is based upon analysis of communication as a basis for the design of Information Systems.”[10] Communication plays a central role in both DEMO and LAP, as a means to achieve mutual understanding in the latter, based on Speech Acts Theory and the Theory of Communicative Action, and in the former as the basis of three modes – essential, informational and documental – of communication in organizations. Furthermore, communication is at the root of the study of Enterprise Ontology, one of the pillars of Enterprise Engineering. 

The Design and Engineering Methodology for Organizations – DEMO – as well as Enterprise Engineering focus on the study of systems, specifically organizations and enterprises as systems. Both draw their foundational system notion from the ontology of Bunge; in Enterprise Ontology this notion is explicitly referred to as the ontological system, and in DEMO as the organization. The definition of organization (as a system) in DEMO is closely related to that of the ontological system, although the definition of the latter in Enterprise Ontology is broader and more formal[2]. In any case, both the methodology and the theory pursue the same premise: the enterprise (or organization) as a system. The notion of ontological system[2] is defined below.

Something is a system if and only if it has the following properties: 

  • Composition: a set of elements of some category (physical, social, biological, etc.). 
  • Environment: a set of elements of the same category; the composition and the environment are disjoint. 
  • Production: the elements in the composition produce things (e.g., goods or services) that are delivered to the elements in the environment. 
  • Structure: a set of influence bonds among the elements in the composition, and between them and the elements in the environment. 

Once “something” meets the criteria above, it is suitable for design, and (possibly) subsequent engineering and implementation, in Enterprise Engineering. In this field, the conceptual model of an enterprise, referred to as the ontological model, is central to the design and engineering of a system. In fact, the notion of model is considered to be of comparable importance to that of system. A model is defined as follows: “Any subject using a system A that is neither directly nor indirectly interacting with a system B, to obtain information about the system B, is using A as a model for B.”[2]

The ontological model of a system, such as an enterprise, serves as the conceptual model used to capture the essence of the system in a coherent, comprehensive, consistent, and concise way, yielding as a result information about the ontology of the system. This ontology is used in the process of designing and engineering systems, with regard to either the function or the construction of the system, depending on the phase or iteration of system design and engineering. To this end, two types of models are employed, namely the white-box and black-box models mentioned previously.

The white-box model “is a direct conceptualization of the ontological system definition”[2] and relates to the construction (and operation) of the system. The black-box model, on the other hand, relates to the function (and behavior) of the system. The process of design, as mentioned previously, consists of two phases. These are split on the basis of reliance on the constructional or functional, hence white-box or black-box, model of the system, and are correspondingly called the analysis and synthesis phases of design. Similarly, the process of engineering a system relies on the constructional, or white-box, model of the system.

There are three key observations about the design and engineering of systems. The first is that the ontological model of a system is the best source of constructional information about it, and hence should be used as the basis for the design and engineering of systems. The second is that functional and constructional design are always alternated with each other; this is best explained as follows: “a function cannot support another function directly, because functions do not have needs; only constructions do.”[2] The third is that there is a philosophical stance behind the use of ontology in the design and engineering of systems, one that “requires us to make a strict distinction between the observing subject and the observed object.”[2]

Architecture and architecturing are of paramount importance in the development of systems. The activity of architecting results in the principles of architecture, which are used for (re)designing and (re)engineering systems. These principles are the result of using a framework “to support the devising of architectures.”[9] With this (richer) conceptual framework in place, it is now possible to delve deeper into the details of the design and engineering of business processes and ICT-applications.

5. Design and Engineering

The design and engineering of business processes and ICT-applications are central concerns in the field of Enterprise Engineering. The major pillars of this field, namely Enterprise Ontology and Enterprise Architecture, are instrumental in this development process and provide much of the constructional knowledge necessary to effectively design organizations. At the core of the field, within the notions of Architecture and Ontology, lies much of its conceptual and practical power; one example is enabling the creation of Information Systems that deal with both social and technical aspects.

The two main areas of focus in the design and engineering of systems are business processes and ICT-applications. Business processes are mainly part of the realization of an organization, and ICT-applications of its implementation. For the realization of an organization, a layered integration of three aspect organizations is performed. The implementation of the organization is “the making operational of the organization’s realization by means of technology.”[2] It is imperative, then, to understand the different application and organization layers that are involved, namely the B-, I- and D-applications and organizations. Following the formal definition of architecture, within the architecture framework of an organization, and relying on the conceptual and operational notions of architecture, the design and engineering of business processes and ICT-applications will be bounded by the principles of architecture. To understand this, it is necessary to explore the system types, design domains, and areas of concern that are used to establish design principles.

System Types

For the system types, the organization theorem[2] of Enterprise Ontology is used. According to the organization theorem, an organization consists of three layers: the B-organization (from Business), the I-organization (from Intellect), and the D-organization (from Document). The D-organization supports the I-organization, and the I-organization supports the B-organization. These organizational layers are supported by ICT systems, which can be divided into four categories: B-applications, I-applications, D-applications, and hardware. D-applications support I-applications, and I-applications support B-applications. Hardware is defined as a separate system type, on which the applications run.

The difference between the B-, I- and D-organizations and the B-, I- and D-applications lies in the nature of the activities executed at each level and in the technologies relied upon to improve the efficiency of the processes they deal with. The D-organization and D-applications are concerned with datalogical activities: activities that concern only the form of information, such as copying and storing. The I-organization and I-applications are concerned with infological activities: activities that concern the content of information, such as inquiring and calculating. The B-organization is the part of an organization that performs ontological activities: activities in which something is actually brought about, such as deciding and judging. Business processes are predominantly ingrained in the B-organization and are implemented through a mix of B-applications and actor technologies.

The figure below shows a graphical representation of the organization theorem: the three organizations and the kinds of production acts their actors perform. The divisions of the figure relate to coordination, actor roles, and production.

Figure 5  [2]
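
A minimal tabulation of the layering described above, with the support relations and the kinds of activities written out; the dictionary form is an assumption made only to illustrate the text.

```python
# 'x supports y' is recorded as SUPPORTS[x] = y, for organizations and applications.
SUPPORTS = {
    "D-organization": "I-organization",
    "I-organization": "B-organization",
    "D-applications": "I-applications",
    "I-applications": "B-applications",
}
# Hardware is a separate system type on which the applications run (not modeled here).

# Kinds of activities per organization layer, as described above.
ACTIVITIES = {
    "D-organization": "datalogical: only the form of information (copying, storing)",
    "I-organization": "infological: the content of information (inquiring, calculating)",
    "B-organization": "ontological: bringing something about (deciding, judging)",
}
```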

Design Domains

As we have seen above, the different layers support each other: lower-level layers support higher-level layers. We have also seen that two design domains can be distinguished: a functional and a constructional design domain. This division also applies to the organization theorem: the functional layer of the D-organization supports the constructional layer of the I-organization, and the functional layer of the I-organization supports the constructional layer of the B-organization.

The figure below depicts the system types and design domains of Enterprise Ontology. The design domains are represented by the F (for functional layer) and C (for constructional layer)[9]. 

Figure 6 [2]
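
The cross-layer support just described can also be written out as pairs of (system type, design domain), with F for the functional and C for the constructional domain; this small tabulation merely restates the text.

```python
# (supporting system type, its design domain) -> (supported system type, its design domain)
CROSS_LAYER_SUPPORT = {
    ("D-organization", "F"): ("I-organization", "C"),
    ("I-organization", "F"): ("B-organization", "C"),
}
```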


Areas of Concern

Areas of concern are classifications of principles, defined by stakeholders of the Using System. They can partially overlap each other. Priorities can also be assigned to them, although this is mostly not formalized. This leaves room for extension and specialization across the different types of organizations identified previously.

6. Application to Enterprises

One application of architecture to enterprises is in (improving) the design and engineering of Web services. By providing business functionality that stems from business processes in the form of services, enterprises can extend their deployments of ICT-applications within the enterprise or to other enterprises. These extensions, however, require some alignment.

Concepts like SOA aim at “aligning business and information technology (IT) in order to increase flexibility and to have the possibility of quickly adapt to fast changing market requirements.” However, the extensive and complex standards commonly used in SOA, such as SOAP, can inhibit the effectiveness of architectural principles and add overhead to network communications. An alternative is provided by the Representational State Transfer (REST)[12] architectural style.

A known problem with existing Web services implementations is the lack of agreement on the semantics of SOAP and WSDL. At the core of such agreement lies the ontology of what is being agreed upon. To this effect, the notions of Enterprise Ontology and Enterprise Architecture are central to the alignment of (virtual) enterprises on the Pragmatic Web[3]. By agreeing on an ontology, communities of interest and practice, such as enterprises, could rely on the powerful notions of architecture in the design and engineering of (new) web services. Such efforts would be more efficient and effective through the use of architectural styles like REST.
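
As a rough illustration of the REST style mentioned above, the sketch below exposes a single business resource over HTTP using only the Python standard library. The resource name, identifiers, and payload are hypothetical, and the example is not tied to any particular enterprise ontology or existing service.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical resource: the state of purchase orders, addressed as /orders/<id>.
ORDERS = {"42": {"status": "accepted"}}


class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return a JSON representation of the requested order, or 404 if unknown.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "orders" and parts[1] in ORDERS:
            body = json.dumps(ORDERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Serve the resource locally; stop with Ctrl+C.
    HTTPServer(("localhost", 8000), OrderHandler).serve_forever()
```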

7. Conclusions

The prescriptive notion of architecture is a theoretically sound and practically useful notion. This is evidenced by the conceptual and operational definitions of architecture and their influence on the design and engineering of systems.

Architecturing an organization is a transitive activity: its object is the set of design principles of the organization. The term architecture also denotes action, as is evident from its normative definition. Together, they are a fundamental part of the system development process, and the prescriptive notion of architecture is instrumental in it.

The notions of ontological system and ontological model are useful, in fact essential, for the definition of architecture and architecture framework. 

 References
