1st Annual ECEGSA Conference Timetable
27 January 2006, ECERF W3-087
|9:00 - 9:15
||Analysis of thresholding strategies in associative classification
|9:15 - 9:30
||A survey of fuzzy cognitive maps applications in engineering
|9:30 - 9:45
||Knowledge re-use and knowledge integration with the use of techniques of collaborative clustering
|9:45 - 10:00
||UWB pulse generator circuit
|10:00 - 10:15
||Nonlinear flatness-based control of a 6 pulse PWM voltage source converter
|10:15 - 10:30
||Offset and vibration compensation of a high-speed rotating shaft with active magnetic bearings
|11:30 - Noon
||Keynote Lunch (included in keynote ticket cost)
|Noon - 1:00
||Dr. Jorge Cham
||Keynote Seminar: The Power of Procrastination
(tickets on sale for $5 at the door / ETLC Info Booth)
|1:00 - 2:00
||Dr. Cham's book signing and sales in the ETLC atrium
|2:00 - 2:15
||Finite-difference time-domain simulation of a prism coupler
|2:15 - 2:30
||Single cycle THz pulse propagation in sub-wavelength sized scatterers
|2:30 - 2:45
||Experimental investigation and modeling of electron pulse generation using surface plasmons
|2:45 - 3:00
||Submicron scale three-dimensional structure writing using two-photon absorption process
|3:00 - 3:15
||Thermalization of sputtered and reflected neutrals in a DC magnetron sputtering system
|3:15 - 3:30
||Modeling and switching controller design of a heating device for microfluidic reaction device
|3:30 - 3:45
||A computational method for rapid cellular identification
|3:45 - 4:00
||Automated microfluidic platforms for cancer diagnostics
|10:00 - 3:00
||ARVP, SPIE, NETERA - Come check out the amazing information these groups have to offer! All booths are located in the 3rd floor ECERF lounge for the duration of the conference.
|Congratulations to the winner of the 2006 ECEGSA Conference Communication Award:
"Offset and vibration compensation of a high-speed rotating shaft with active magnetic bearings"
Judges commented that "Thomas' well-structured introduction solidly established the motivation and context for his presented work; he effectively connected with his audience, and his entire presentation worked well as a cohesive unit."
Presentations were judged based on the presenter's ability to clearly explain the content of their talk, the effectiveness of their presentation style, and their ability to describe the context of their work to a general multi-disciplinary audience. An example of the judging form is available here.
9:00 am - 9:15 am Rafal Rak
Analysis of thresholding strategies in associative classification.
The extensive manual work associated with describing and classifying scientific documents has become a problem in an era of constantly increasing information. The huge number of incoming documents calls for intelligent classification capabilities. For example, MEDLINE, the National Library of Medicine's (NLM) database, contains approximately 13 million references to biomedical articles dating back to 1966. Every year over 500,000 new references are added, which translates to about 1,500-3,500 references per day. NLM employees manually assign Medical Subject Headings (MeSH), the NLM's controlled vocabulary thesaurus, to each new reference; given the error-prone nature of manual work, this task is very difficult and likely to become infeasible in the future. This research addresses the problem of classifying huge datasets of documents into a well-defined structure of categories using state-of-the-art associative classification. The paper focuses on comparing different techniques for tuning the classification threshold that determines how well a classifier assigns documents to multiple categories.
The process of classification, i.e. assigning classes (or categories) to documents, is usually preceded by a training phase, in which a classifier is built from the information contained in a training dataset (a dataset with known class labels). The resulting classifier consists of a set of rules that are used to predict the classes of new incoming documents.
In this study a state-of-the-art tool for associative classification, ACRI (Associative Classifier with Reoccurring Items), was employed. Associative classification extends the structure of transactions, known from association rule mining, by adding a class label to each transaction. When classifying documents, a transaction is the set of words from a single document, and the class label is the category to which the document should be assigned. The generated set of rules is used as a classifier to predict the classes of new documents.
Multi-label classification, i.e. assigning a document to more than one class, necessitates a series of experiments to tune the classifier's threshold, which indicates the number of classes fitting a single document. This threshold can be either global (defined per dataset) or local (defined per class), as well as document-oriented (classes are assigned to a document) or class-oriented (documents are assigned to a class). Combinations of these yield the three main thresholding strategies: score-based (SCut), rank-based (RCut), and proportion-based (PCut). In this paper two of them, SCut and RCut, are considered, along with their modified versions, proportion-rank-based and global score-based.
The paper is organized as follows. The first sections describe the classification problem in general and introduce the idea of associative classification and its extension, recurrent-item associative classification. The following sections present definitions of the thresholding strategies. A series of experiments is then performed on OHSUMED, a corpus of over 300,000 MEDLINE references, applying the previously defined thresholding strategies, and the results are compared with those of other researchers. A discussion of the relative quality of the different thresholding strategies concludes the paper.
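The two strategies compared in the abstract can be sketched in a few lines. The class names, scores, k, and thresholds below are invented for illustration and are not taken from the ACRI system:

```python
# Sketch of the RCut (rank-based) and SCut (score-based, local threshold)
# multi-label thresholding strategies, applied to hypothetical per-class
# classifier scores for one document.

def rcut(scores, k):
    """Rank-based: assign each document its top-k scoring classes."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:k])

def scut(scores, thresholds):
    """Score-based: assign every class whose score reaches its own
    (local, per-class) threshold."""
    return {c for c, s in scores.items() if s >= thresholds[c]}

# Hypothetical scores for one MEDLINE-like document against three classes.
doc_scores = {"Neoplasms": 0.92, "Genetics": 0.55, "Surgery": 0.10}

print(rcut(doc_scores, 2))   # top-2 classes regardless of absolute score
print(scut(doc_scores, {"Neoplasms": 0.5, "Genetics": 0.6, "Surgery": 0.5}))
```

Note how the two strategies can disagree: RCut always returns exactly k classes, while SCut returns however many classes clear their thresholds.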
9:15 am - 9:30 am Wojciech Stach
A survey of fuzzy cognitive maps applications in engineering.
There are numerous methods for modeling dynamic systems. In general, they can be divided into two categories. The first comprises quantitative methods, which can be applied both to well-understood systems, as in the mathematical programming techniques of operations research, and to less well-understood ones, as in the statistically-based methods of data mining. However, quantitative methods suffer from substantial drawbacks. First, significant effort and specialized knowledge outside the domain of interest are required to apply these techniques. Second, some dynamic systems are nonlinear, which may make quantitative approaches impossible. Finally, numerical data are often uncertain or hard to collect. The second category comprises qualitative methods, which are free from the limitations of the first category.
Fuzzy Cognitive Maps (FCMs) were introduced by Kosko in 1986 as an extension of Cognitive Maps. They are a qualitative modeling technique that combines the robust characteristics of fuzzy logic and neural networks. The main advantage of this method stems from the simplicity of both the model representation and its execution. FCMs represent knowledge symbolically using concepts and the relationships between them. One possible model representation is a digraph consisting of nodes linked by edges: the nodes represent concepts, or variables, relevant to a given domain, while the relationships between them are represented by edges. Once developed, the FCM model is simulated to perform qualitative analysis of the system.
Despite their simplicity, FCMs have gained significant interest and emerged as a powerful modeling and simulation technique. The scope and breadth of their applications demonstrate the usefulness of the method, covering a wide range of research and industrial areas such as engineering, medicine, political science, international relations, military science, and history. In this paper, studies using FCMs in the engineering domain are presented, with applications in software development, network intrusion detection systems, distributed autonomous robot systems, web mining, industrial plants, supervisory control systems, electronic data interchange, failure modes and effects analysis, on-line fault diagnosis, electrical circuits, and others.
This paper is organized as follows. We begin with a short introduction to FCMs, including historical background and an explanation of FCM working principles. This is followed by examples of the application of this tool within the engineering domain reported in the literature. Next, a comparative study is performed, which includes a discussion of the usefulness of this particular technique in engineering. Finally, we summarize and conclude the paper.
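As a toy illustration of the FCM execution described above, the map can be iterated with a sigmoid squashing function until the concept activations settle. The concept names, edge weights, and steepness parameter here are invented and are not drawn from the surveyed applications:

```python
# Minimal FCM simulation sketch: nodes are concepts, weighted edges are
# causal relationships, and the state is iterated with the common
# "modified Kosko" rule  A_i(t+1) = sigmoid(A_i(t) + sum_j w_ji * A_j(t)).
import math

W = {  # W[src][dst] = causal edge weight, in [-1, 1] (illustrative)
    "demand":     {"production": 0.8},
    "production": {"inventory": 0.6, "demand": -0.3},
    "inventory":  {"production": -0.5},
}

def step(state, W, lam=1.0):
    nxt = {}
    for c in state:
        total = sum(W.get(src, {}).get(c, 0.0) * a for src, a in state.items())
        nxt[c] = 1.0 / (1.0 + math.exp(-lam * (total + state[c])))  # sigmoid
    return nxt

state = {"demand": 0.7, "production": 0.2, "inventory": 0.1}
for _ in range(50):       # iterate until (near) a fixed point
    state = step(state, W)
print(state)              # steady-state activations, each in (0, 1)
```

The qualitative analysis then consists of reading off which concepts end up activated or suppressed at the fixed point.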
9:30 am - 9:45 am Rai Partab
Knowledge re-use and knowledge integration with the use of techniques of collaborative clustering.
The idea of establishing cluster labels without accessing the original data distributed among several sites motivates us to develop a framework for Distributed Collaborative Clustering.
Coordinating cluster labels in this context means that we exploit the information conveyed by cluster labels available at other sites without accessing their original features. Similar concepts exist in the literature and are known as knowledge re-use and knowledge integration. We contrast other knowledge re-use and knowledge integration models from the literature with the fuzzy collaborative clustering presented in this study.
The research proposed herein outlines several aspects of developing the cluster-coordinating framework, which we call a knowledge re-use and knowledge integration model using collaborative clustering. There have been many recent advances in distributed approaches and many frameworks have been developed; however, there have been very few attempts to define a collaborative clustering framework.
We first re-formulate the collaborative problem by forming a pertinent performance index. Next we identify the main difficulties and show that the performance index can be a suitable vehicle for quantifying the collaboration. We also elaborate on the role of the underlying optimization criterion and its components, which guide the development of the partition matrices that lead to maximal collaboration.
The proposed framework is intended for large, high-dimensional datasets in a dynamic distributed environment. Preliminary results show that high-quality global models can be obtained without much loss of privacy, and that quality and robustness can be improved.
In this paper, we present a literature survey discussing different knowledge re-use frameworks, such as collaboration, consensus building and proximity formation. Next, we show preliminary experimental results of different collaborative clustering modes on synthetic and real datasets from the UCI machine learning repository. Finally, we discuss the contributions to date and elaborate on future research plans.
9:45 am - 10:00 am Sheehan Khan
UWB pulse generator circuit.
The idea of using short-duration (large-bandwidth) baseband pulses for communications has been around since the development of sub-nanosecond technologies in the 1960s. In 2002 the Federal Communications Commission (FCC) authorized the use of Ultra Wideband (UWB) for commercial applications. The FCC defines a UWB signal as any signal with a fractional bandwidth greater than 20% or a signal bandwidth greater than 500 MHz. Gaussian pulses are good candidates for UWB communication, as they offer an excellent time-frequency resolution product. It has been found that the 5th derivative of a Gaussian pulse offers the most efficient spectrum use under the FCC spectral mask. This paper focuses mainly on the pulse generator block, an integral component of a UWB system responsible for generating the short sub-nanosecond pulses called monocycles. Our pulse generator consists of a digital network of inverters, NAND and NOR gates that generates triangular pulses. These pulses are then fed into a transistor output stage, thereby generating the required 5th derivative of a Gaussian monocycle.
 S. Bagga, G. de Vita, S.A.P. Haddad, W.A. Serdijn, and J.R. Long, "A PPM Gaussian Pulse Generator for Ultra-Wideband Communications."
 H. Kim, D. Park, and Y. Joo, "All-Digital Low-Power CMOS Pulse Generator for UWB System."
 L. Stoica, S. Tiuraniemi, and I. Oppermann, "An Ultra Wideband Low Complexity Circuit Transceiver Architecture for Sensor Networks."
 L. Yang and G. B. Giannakis, "Ultra-Wideband Communications: An Idea Whose Time Has Come," IEEE Signal Processing Magazine, November 2004, pp. 26-54.
 T. K. K. Tsang and M. N. El-Gamal, "Ultra-Wideband (UWB) Communications Systems: An Overview," Microelectronics and Computer Systems Laboratory, McGill University, Montreal, Quebec.
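For reference, the 5th-derivative Gaussian monocycle mentioned in the abstract has a simple closed form, obtained by differentiating exp(-t²/(2σ²)) five times. The pulse width σ below is an assumed example value often quoted for fitting the FCC indoor mask, not a parameter of the presented circuit:

```python
# Sketch (not the paper's circuit): the 5th-derivative Gaussian monocycle
# that the transistor output stage is meant to approximate.
import math

def gauss_5th_deriv(t, sigma=51e-12):
    """5th derivative of exp(-t^2/(2*sigma^2)); sigma ~ 51 ps is an
    assumed example width, giving a sub-nanosecond monocycle."""
    s2 = sigma ** 2
    # d^5/dt^5 exp(-t^2/(2 s2)) = (-t^5/s2^5 + 10 t^3/s2^4 - 15 t/s2^3) * exp(...)
    poly = (-t**5 / s2**5) + (10 * t**3 / s2**4) - (15 * t / s2**3)
    return poly * math.exp(-t**2 / (2 * s2))

# The monocycle is odd-symmetric and vanishes at t = 0:
print(gauss_5th_deriv(0.0))   # 0.0
```

Being an odd function, the pulse has zero DC content, which is one reason higher-order Gaussian derivatives radiate efficiently from small antennas.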
10:00 am - 10:15 am Edward Song
Nonlinear flatness-based control of a 6 pulse PWM voltage source converter.
An AC/DC Pulse Width Modulated Voltage Source Converter (VSC) is a power electronic device widely used in industrial applications such as electric vehicles, induction motors, wind turbine generators, and power transmission and delivery. The converter is a combination of a rectifier and an inverter, which allows bi-directional power flow. A basic 6-pulse VSC consists of 6 Insulated Gate Bipolar Transistors (IGBTs) controlled by sinusoidal PWM. The system measurements used in feedback are the three phase currents and the DC voltage; the inputs to the system are the modulation index and the phase angle of the PWM control signal. Due to the switching harmonics present in the VSC, it is difficult to implement controllers using an exact model of the system, so approximate average models are typically used. PI control is commonly used in industry for this system. In this paper, a model-based nonlinear control technique, flatness-based control of the VSC, is presented. The trajectory planning of the reactive current and DC voltage is also discussed. Simulations are shown to demonstrate the feasibility of the theory.
10:15 am - 10:30 am Thomas Grochmal
Offset and vibration compensation of a high-speed rotating shaft with active magnetic bearings
Active Magnetic Bearings (AMBs) are becoming increasingly important in many applications for the support of high-speed rotors. AMBs typically operate by proportional-plus-derivative (PD) feedback control in order to position the geometric center of the rotor within the bearing. However, two problems arise with this approach. First, there is a static offset of the rotor from its setpoint, which can be significant due to effects such as gravity and current biasing, and which can be made worse by operational loading of the rotor. Second, an eccentric rotor tends to rotate about its inertial center, leading to periodic unbalance forces that cannot be fully suppressed. A nonlinear AMB model is presented that represents these effects as constant and harmonic disturbance forces, and a reduced-order nonlinear observer is constructed to estimate and suppress these disturbances. Experimental results demonstrate the effectiveness of this approach for a shaft rotating at 10,000 rpm.
Lunch and Keynote (Rm. ETLC-1-013)
2:00 pm - 2:15 pm Michael Quong
Finite-difference time-domain simulation of a prism coupler.
It has been demonstrated that free-space light can be converted into a guided mode by various means; prism couplers are commonly used to couple light into waveguides. Finite-difference time-domain code was written to simulate a prism-coupling scheme (first designed at Bell Laboratories) in which light is coupled into a thin-film waveguide through an evanescent field. Using Maxwell's equations, the evanescent electric fields can be used to model the coupling of light into the thin-film waveguide, and the coupling efficiency can be found for various conditions. Non-reflecting boundary conditions are discussed. Additionally, parallelization considerations will be discussed.
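The FDTD method named above can be illustrated with a minimal one-dimensional Yee update loop; the actual prism-coupler code is of course multi-dimensional with non-reflecting boundaries, and the grid size, step count, and Gaussian source here are arbitrary choices in normalized units:

```python
# Minimal 1-D FDTD (Yee) sketch in normalized units (c = 1, dx = dt):
# E and H live on staggered grids and are updated from each other's curl.
import math

N, STEPS = 200, 60
ez = [0.0] * N          # electric field nodes
hy = [0.0] * N          # magnetic field nodes (offset half a cell)

for t in range(STEPS):
    for i in range(N - 1):          # update H from the spatial change in E
        hy[i] += ez[i + 1] - ez[i]
    for i in range(1, N):           # update E from the spatial change in H
        ez[i] += hy[i] - hy[i - 1]
    ez[100] += math.exp(-((t - 30) ** 2) / 100.0)   # soft Gaussian source

print(max(abs(e) for e in ez))      # pulse amplitude after propagation
```

Each interior cell update depends only on its neighbors, which is exactly why FDTD parallelizes well by splitting the grid across processors.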
2:15 pm - 2:30 pm Kenneth Chau*
Single cycle THz pulse propagation in sub-wavelength sized scatterers.
Terahertz time domain spectroscopy (TTDS) is a growing field with numerous applications including biomedical imaging and material spectroscopy. Traditionally, TTDS experiments have used homogeneous non-scattering materials where only dispersion and absorption account for pulse distortion. However, for real world applications of TTDS, it is important to study the behaviour of THz propagation through nonhomogeneous media where scattering is significant. Here, we report on our experimental and numerical studies of on-axis, single-cycle, terahertz pulse propagation through strongly scattering random media. Experimentally observed effects such as spectral narrowing and consequent pulse broadening, the decay of integrated power, and pulse delay are accurately reproduced by a photon migration model.
2:30 pm - 2:45 pm Scott Irvine*
Experimental investigation and modeling of electron pulse generation using surface plasmons.
Ultrashort bursts of high-energy electrons can be used to study the intricate details of atomic/molecular events. Contemporary methods for generating ultrashort energetic electron pulses for time-resolved electron diffraction are based on electrostatic acceleration, which limits the electron pulse duration to several hundred femtoseconds. This results from the large experimental arrangements that are dominated by space-charge effects. We investigate an innovative technique that employs surface plasmon waves launched with ultrashort laser pulses. This allows for synchronous generation and acceleration of electrons, eliminating the necessity of electrostatic grids and reducing the accelerating region to a space smaller than the excitation laser wavelength. Experimental results indicate that this all-optical method can produce 2 keV electrons using 30 fs, 0.5 mJ pulses from a Ti:Sapphire laser amplifier. The findings are compared with test-particle code, which indicates that the electrons are accelerated within 300 nm, yielding acceleration gradients in the multi GeV/m range. These findings open the doorway for a variety of experiments involving ultrashort time-resolved electron diffraction and pulsed x-ray generation.
2:45 pm - 3:00 pm Zahid Chowdhury
Submicron scale three-dimensional structure writing using two-photon absorption process.
In a two-photon absorption (TPA) process, photochemical and photo-physical reactions are confined to the order of a laser wavelength in three dimensions (3D). When a femtosecond (fs) laser is focused on a small area of a sample, it produces non-linear effects like TPA, in which only a small volume at the focal point absorbs a large amount of energy compared to the other regions of the sample. This absorbed energy may change the physical or chemical properties of the material. The ability to modify properties within such a small volume has made 3D writing possible on a submicron scale with minimal complexity. Kawata et al. [1] have demonstrated structures with a resolution of 150 nm using a fs 800 nm Ti:Sapphire oscillator and SCR 500 resin. Recently, Chichkov et al. [2] produced different structures with 200 nm lateral resolution using a commercially available photoresist, ORMOCER®, and a similar Ti:Sapphire laser system. We are exploring the writing of photoresists and their applications. Structures produced in the first experiment, shown in Fig. 1, demonstrate controlled writing of simple posts and dots. In Fig. 1(a), posts with 10 µm separation were written; the first two posts are 2 µm taller than the rest, since for these structures the focal point was 2 µm higher above the substrate. The smallest features obtained from the initial experiment are shown in Fig. 1(b). These structures have a width of 650 nm, which is smaller than the Rayleigh criterion resolution of 1.5 µm (d = 1.22 λ/NA, where NA is 0.65 for the 40X objective). Better-resolution structures produced using a 100X objective, and results comparing the performance of ORMOCER and SU-8 resists, will be presented.
Fig. 1. Different structures produced in the first experiment (a) posts with different heights (b) smallest dots.
 [1] S. Kawata and H.-B. Sun, Applied Surface Science, vol. 208-209, pp. 153-158, 2003.
 [2] J. Serbin, A. Egbert, A. Ostendorf, B. N. Chichkov, R. Houbertz, G. Domann, J. Schulz, C. Cronauer, L. Fröhlich, and M. Popall, Optics Letters, vol. 28, no. 5, pp. 301-303, 2003.
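The diffraction-limit figure quoted in the abstract can be reproduced directly from the Rayleigh criterion:

```python
# Rayleigh criterion resolution, d = 1.22 * lambda / NA, with the values
# given in the abstract: an 800 nm Ti:Sapphire beam and a 0.65 NA (40X)
# objective.
wavelength_um = 0.8     # 800 nm expressed in micrometres
NA = 0.65               # numerical aperture of the 40X objective
d_um = 1.22 * wavelength_um / NA
print(f"{d_um:.2f} um")  # ~1.50 um, vs. the 650 nm features obtained via TPA
```

The factor-of-two-plus improvement over this limit is the point of the nonlinear TPA process: polymerization occurs only where the squared intensity is high, i.e. in a volume smaller than the focal spot itself.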
3:00 pm - 3:15 pm F. Jimenez
Thermalization of sputtered and reflected neutrals in a DC magnetron sputtering system.
Magnetron sputtering is still a common technique used to deposit a myriad of thin films for several applications. One important phenomenon occurring in the deposition process is the thermalization of energetic neutrals coming from the target. Thermalization occurs as energetic neutrals travel towards the walls of the chamber, losing most of their initial energy through collisions. The interaction of sputtered particles with the background gas in a magnetron sputtering system is simulated using a hybrid Monte Carlo algorithm that incorporates the effects of gas heating and neutral depletion in the zone between the cathode and the substrate. This work concentrates on investigating the average number of collisions needed to thermalize sputtered and reflected neutrals. Preliminary results show that including the rarefaction and heating of the process gas greatly influences the number of collisions necessary to thermalize energetic neutrals.
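A crude back-of-the-envelope version of the thermalization question (not the paper's hybrid Monte Carlo code, which additionally models gas heating and rarefaction) assumes a fixed mean fractional energy loss per elastic collision; the projectile/gas masses, initial energy, and thermal cutoff below are example values only:

```python
# Toy estimate of the number of elastic collisions needed for an energetic
# neutral of mass m to drop from E0 to the gas thermal energy E_th, using
# the hard-sphere average energy retained per collision:
#     <E'/E> = 1 - 2*m*M / (m + M)**2
import math

def collisions_to_thermalize(E0_eV, m, M, E_th_eV=0.04):
    keep = 1.0 - 2.0 * m * M / (m + M) ** 2   # mean fraction of energy kept
    n = math.log(E_th_eV / E0_eV) / math.log(keep)
    return math.ceil(n)

# Example: a 5 eV sputtered Cu atom (m = 63.5 u) in Ar gas (M = 39.9 u),
# thermalizing to an assumed ~0.04 eV background.
print(collisions_to_thermalize(5.0, 63.5, 39.9))
```

A heavier projectile relative to the gas retains more energy per collision and therefore needs more collisions to thermalize, which is one reason gas rarefaction near the target matters so much.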
3:30 pm - 3:45 pm Patrick Pilarski*
A computational method for rapid cellular identification.
Despite rapid increases in the accessibility of miniaturized diagnostic devices, the analysis of the generated data patterns is still a computationally difficult problem; this is especially true for the laser scattering patterns generated by wide-angle hand-held cytometers. In a wide-angle cytometry device, laser light passes along a fluid waveguide and through the cytoplasm of a cellular body (or other particle), where it refracts to become a complex two-dimensional intensity pattern. The resulting intensity peaks have been shown to contain valuable information regarding the internal structure of the scattering body. While it is mathematically intractable to directly solve the inverse problem for these cellular scattering patterns, it is possible to indirectly infer some cellular characteristics from the high-level patterns in the captured intensity plots.
In this work we examine the impact of a computational intelligence system that blends computer vision and machine learning to take advantage of observable image properties; we show how a library of FDTD simulations (generated using the WestGrid high-performance computing grid over the Netera network, as per the work of Liu et al.) may be used to infer initial cellular structure. This work illuminates a potential approach to rapid image characterization, and examines how such characterization facilitates accurate lab-on-a-chip medical diagnostics.
 K. Singh, C. Liu, C. Capjack, W. Rozmus, and C. J. Backhouse, "Analysis of Cellular Structure by Light Scattering Measurements in a New Cytometer Design Based on a Liquid-Core Waveguide," IEE Proc.-Nanobiotechnology, vol. 151, 2004.
 K. A. Sem'yanov, P. A. Tarasov, J. T. Soini, A. K. Petrov, and V. P. Maltsev, "Calibration-free method to determine the size and hemoglobin concentration of individual red blood cells from light scattering," Applied Optics, vol. 39, pp. 5884-5889, 2000.
 V. P. Maltsev, "Scanning flow cytometry for individual particle analysis," Review of Scientific Instruments, vol. 71, pp. 243-255, 2000.
 C. Liu, C. E. Capjack, and W. Rozmus, "3-D simulation of light scattering from biological cells and cell differentiation," Journal of Biomedical Optics, vol. 10, pp. 014007 (12 pages), 2005.
3:45 pm - 4:00 pm Vincent Sieben*
Automated microfluidic platforms for cancer diagnostics.
Several modern cancer diagnostic assays incorporate a variety of genetic analysis techniques. These traditional protocols are often laborious for staff, long in duration, and consume large volumes of reagents. When the genetic analysis techniques are performed on microfluidic systems, a significant portion of the steps in these protocols can be automated. Furthermore, the small dimensions of microchips lead to short reaction times and lower reagent consumption. As a case study, I will present one such miniaturization: my work on a DNA labeling technique known as PROD, or Post-column Reactor for On-chip Derivatization. Ultimately, PROD requires lower volumes of reagents and can perform a complete analysis in minutes, rather than the hours typical of classical methods.
 V. J. Sieben and C. J. Backhouse, "Rapid on-chip postcolumn labeling and high-resolution separations of DNA," Electrophoresis, vol. 26, pp. 4729-4742, 2005.
4:00 pm - 4:15 pm Jingbo Jiang
Modeling and switching controller design of a heating device for microfluidic reaction device.
A heating device using thermo-electric modules (TEMs) is designed and implemented to serve the heating needs of microfluidic devices. Microfluidic devices fabricated using photolithographic techniques are oriented towards rapid, low-volume implementations of conventional molecular biology procedures. Many conventional tests have been adapted to the microfluidic platform, such as polymerase chain reaction (PCR), DNA sequencing, and capillary electrophoresis (CE). These molecular tests require precise temperature control, which is the main objective of this work.
TEMs are used because of their current-controlled heating and cooling ability: when the supply current is positive, the upper side of the TEM heats and the lower side cools, and vice versa. Two TEMs are used in cascade to account for the limited temperature range of a single TEM and for the optimal heat-pumping temperature difference between the two sides of a TEM. Compared to commercial heating devices for PCR and other tests, this custom-made device is oriented to small-volume on-chip reactions instead of large-volume tube reactions, and is much cheaper.
Due to the complexity of theoretical modeling of the TEMs, a black-box system identification method is used to obtain local models at different operating regions. Considering the time-varying behaviour caused by the limited heat capacity of the heat sink, temperature differences are chosen as output variables; linear characteristics between these new outputs and the current inputs are observed in random-binary-signal input tests. The coupling effects between the two TEMs necessitate the use of multi-input multi-output (MIMO) models. Three 2x2 MIMO models are identified and used for controller design.
A switching strategy is adopted mainly due to the temperature control requirements of the tests: fast transition rates and small steady-state errors. In the upper channel, a PD controller is used in the transition region, while a PI controller is used in the local steady-state region. In the lower channel, different PI controllers are switched between regions. Bumpless transfer is considered in the switch from the transition region to the steady-state region; no bumpless transfer is used from the steady-state region to the transition region, since instant changes are expected for fast heating/cooling. To compare performance, a set of decentralized controllers and a MIMO H∞ controller are also designed and simulated. Closed-loop simulations of the three controllers verify the fast response and small steady-state error of the designed switching controller.
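The upper-channel switching logic described above can be sketched as follows. The gains, switching band, and sample time are illustrative placeholders rather than the identified values, and the integral state is simply held in the PD region as a crude stand-in for the bumpless-transfer machinery discussed in the abstract:

```python
# Sketch (not the authors' C/PIC implementation) of error-band switching
# between a PD law (transition region: fast ramps) and a PI law
# (steady-state region: zero steady-state error).
class SwitchingController:
    def __init__(self, kp_pd=4.0, kd=0.5, kp_pi=1.0, ki=0.8, band=2.0, dt=0.1):
        self.kp_pd, self.kd = kp_pd, kd        # transition-region PD gains
        self.kp_pi, self.ki = kp_pi, ki        # steady-state-region PI gains
        self.band, self.dt = band, dt          # switch when |error| < band
        self.integral, self.prev_err = 0.0, 0.0
        self.mode = None

    def update(self, setpoint, measured):
        err = setpoint - measured
        if abs(err) > self.band:               # transition region: PD
            self.mode = "PD"
            u = self.kp_pd * err + self.kd * (err - self.prev_err) / self.dt
        else:                                  # steady-state region: PI
            self.mode = "PI"
            self.integral += err * self.dt     # integral held during PD mode
            u = self.kp_pi * err + self.ki * self.integral
        self.prev_err = err
        return u

ctrl = SwitchingController()
print(ctrl.update(95.0, 25.0), ctrl.mode)      # large error: PD action
print(ctrl.update(95.0, 94.5), ctrl.mode)      # inside the band: PI action
```

The design intent mirrors the abstract: the PD law gives fast, well-damped transitions, while the PI integral removes the residual offset once the temperature is near the setpoint.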
Test results: The designed switching controller is implemented in C code and programmed into a PIC controller on a custom-made circuit board with auxiliary ADC/DAC and other hardware. The controller works well, and good DNA amplification results have been observed in PCR tests.
* denotes ECEGSA Executive (not eligible to compete for the communication award)