Channel: Virtualization

Software Development for Virtual Machines


By Thomas Burger

This article discusses why independent software vendors should develop software for virtual machine environments, along with the software techniques that optimize software performance in a virtual machine environment and take full advantage of Intel® Virtualization Technology. It also discusses the advantages of using virtual machines in software development and distribution.


Introduction: Why Should Independent Software Vendors (ISVs) Develop Software for Virtual Environments?

The use of virtual machine (VM) technology is becoming common practice across the industry. A growing number of organizations now use virtualization to slow server sprawl (and the associated demands for power, air conditioning, building space, and floor footprint), to provide high availability for critical applications, and to streamline application deployment and migration. Virtualization simplifies IT operations and lets IT organizations respond quickly to changing business needs. Customers looking to save money, improve efficiency, and free up valuable resources should take full advantage of this new opportunity.

Virtual machines are also becoming a standard container of choice for developers distributing and packaging software. A VM provides a straightforward mechanism for building application best practices into the distribution package, dramatically simplifying the out-of-the-box experience for users while simplifying support issues for developers. Guaranteeing the integrity of the environment improves customer satisfaction and lowers vendor support costs.

Oracle*, for example, has offered its 10g product as a virtual machine configuration for years.

Taking stock of the current landscape is encouraging: the cost of virtual machine software has dropped; computers have become faster and more capable; hardware such as Intel® Virtualization Technology has greatly improved VM speed and capability; advances such as virtual SMP (multi-CPU virtual machines) have emerged; and tools now exist for converting physical machines to virtual machines and for cloning virtual machines.

Finally, virtual machine software adds security. Each virtual machine is isolated from the others, so a crash or virus threat in one VM cannot affect the other machines.


What Does a Virtual Environment Look Like?

An operating system running on a physical computer controls that computer's hardware, and only one operating system can control the hardware at any given time. Virtual machines get around this "one computer, one operating system" limitation by using software called a virtual machine monitor (VMM) to allocate the physical hardware resources among the virtual machines. The VMM assigns resources to the emulated hardware of each VM.

[Presentation: "A Roadmap Overview and Update" by Rich Uhlig, Senior Principal Engineer, Corporate Technology Group, Session IVTS001, ©2006 Intel Corporation]


Using Virtual Machines in Software Development

Software developers can use virtualization to great advantage in the following ways:

  • Sandboxing — Virtual machines can be set to a specific configuration, guaranteeing the integrity of the environment for development and testing.
  • Disaster recovery and high availability — A crashed system can be restored quickly from a virtual image. A standby VM consumes no resources other than drive space and can be started the moment it is needed.
  • Forensic analysis — Snapshots of a virtual image allow quick restoration and investigation of erratic behavior observed during beta testing.


Saving Resources

Allowing a single machine to develop and/or test software across many environments and platforms saves substantial physical resources. This includes not only the initial purchase cost but also the space, power, and maintenance the computers would otherwise consume.

Environment portability is a common problem on complex projects that must be tested on many platforms. Virtual machines save time when reproducing a particular environment. You can build a library of virtual hard disks preloaded with particular software sets, letting development and test teams clone a disk and reproduce a specific environment quickly. When a crash occurs, the work of recreating the development environment is eliminated.


Security

Deploying development environments in virtual machines makes it easy to satisfy corporate security policies and standards. You can share a VM image with colleagues, or use it to create a virtual machine on your home PC (as a sandbox), keeping it isolated from your personal computer to meet corporate security requirements.

New tools for software development and testing can be installed in a virtual machine without endangering your primary setup. Create a copy of your standard environment in a fresh VM and install the new tool to see how it behaves, without risking your existing configuration.


Efficiency

Because the software is kept in a virtual container, it can be moved easily from a virtual test environment into production, easing the migration from development to QA to production.

Basing development work on virtual machines also lets development and QA teams use products such as VMware Lab Manager, a virtual lab automation (VLA) system that provides self-service management of virtual machines. Using VLA reduces the time spent building, maintaining, and rebuilding VM environments, improving productivity.


Limitations

A virtual machine shares physical resources with the other running virtual machines and consumes some processing overhead. Because VMs are always contending for resources, they add stress to testing and should not be used for performance testing of applications that are meant to run on non-virtualized platforms.


Using Virtual Machines in Software Distribution

Selling a preconfigured system such as a virtual machine image, with the operating system and all necessary software in place, guarantees a correct configuration and thereby reduces support work.

Support costs also drop because problems that are normally hard to reproduce, owing to subtle differences between the customer's platform and the developer's, can be investigated by downloading and examining the customer's VM image.


Applications Suited to Virtual Machine Platforms

Where possible, avoid high-performance, demanding applications such as database servers, which need overhead kept to an absolute minimum and already run their machines close to saturation. The areas where virtual machines are the best solution include web servers, DNS servers, application servers, email servers, and any network application that sits idle or lightly used most of the time.


The Advantages of VM-Ready, IVT-Optimized Software

Software developed and configured to support virtual machines and optimized for IVT will run better and faster on VM platforms. By avoiding techniques that burden the VMM and by taking full advantage of IVT's features, such software gives performance-conscious customers a competitive edge.


Developing and Configuring Applications for Virtual Machines

The first rules of VM software development are:

  • Virtual machines are memory intensive.
  • Virtual machines are I/O intensive — relative to the physical environment, that is, since the VMM must create and manage the virtual devices.
  • Process creation and destruction are very expensive on a virtual system; the VMM must keep records when a process is created and clean up after it is destroyed.
  • Other virtual machines are waiting for CPU time, so avoid techniques that monopolize the CPU.


What not to do:

  • Memory-mapped and device-mapped I/O: When an I/O call is made, the VMM must perform considerable bookkeeping to keep cache, memory, and disk synchronized. With many VMs on the system, every time a VM is switched out the VMM must save its memory state along with its other records, and restore everything when the VM resumes. An application that depends on such calls will therefore lose performance.
  • Process creation and destruction: In a native environment the operating system owns the platform, so creating a process is straightforward. In a virtual environment the VMM must track the VM's processes in addition to every other VM running on the platform. Every process creation requires pages to be created in memory and the registers associated with the process to be virtualized, adding cost. Likewise, when a process is destroyed, the VMM must run cleanup.
  • Loops that consume CPU cycles: In a native environment, a while(1) loop has little or no effect on the application, because the whole platform is there to support it. In a virtual environment, every VM is competing for resources, especially CPU, with the other VMs. A while(1) loop in a VM therefore misleads the VMM into giving that VM the full CPU resource when it is not actually needed.


What to do:

  • Memory and devices: where possible, avoid heavy use of memory mapping.
  • Use process pools for efficiency and stay within the guest operating system. A process pool means more bookkeeping on the programmer's side, but it also means faster recovery from context switches.
  • Return the CPU cycles the machine holds whenever possible. Timer and signal techniques let a virtual machine release its CPU cycles. (A minimal sketch of these last two points follows this list.)
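
As a minimal sketch of the last two points, assuming POSIX threads (the pool size and job counter are illustrative only): a small reusable worker pool whose idle workers block on a condition variable, returning their CPU time to the VMM instead of spinning in a while(1) loop, and whose threads are created once and reused rather than created and destroyed per task.

    #include <pthread.h>
    #include <stdio.h>

    #define POOL_SIZE 4
    #define NUM_JOBS  8

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t work_ready = PTHREAD_COND_INITIALIZER;
    static int pending_jobs = 0;
    static int shutting_down = 0;

    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (pending_jobs == 0 && !shutting_down)
                pthread_cond_wait(&work_ready, &lock); /* blocks; no CPU burned */
            if (pending_jobs == 0 && shutting_down) {
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            pending_jobs--;
            pthread_mutex_unlock(&lock);
            printf("worker %ld handled a job\n", id);   /* real work goes here */
        }
    }

    int main(void)
    {
        pthread_t pool[POOL_SIZE];
        for (long i = 0; i < POOL_SIZE; i++)   /* threads created once, reused */
            pthread_create(&pool[i], NULL, worker, (void *)i);

        pthread_mutex_lock(&lock);
        pending_jobs = NUM_JOBS;               /* submit all jobs */
        pthread_cond_broadcast(&work_ready);
        pthread_mutex_unlock(&lock);

        pthread_mutex_lock(&lock);
        shutting_down = 1;                     /* wake idle workers so they exit */
        pthread_cond_broadcast(&work_ready);
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < POOL_SIZE; i++)
            pthread_join(pool[i], NULL);
        return 0;
    }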


Types of Application Problems and Ways to Reduce Them

I/O-intensive applications

Heavy data I/O increases contention in the VMM. Try to minimize data I/O.

Network-intensive applications

Use multiple virtual machines and spread the traffic across several physical NICs (network interface cards) mapped to virtual NICs.

Disk-intensive applications

If the application is disk intensive, make sure the virtual machines sit on a storage area network (SAN). Data drives should be kept on a separate logical unit number (LUN — the address used for an individual disk drive) on the SAN, not on the operating system drive. If performance differs greatly from runs on a native platform, the LUN on the SAN can be converted to a raw disk[1], which maximizes I/O performance.

If several virtual machines have the same I/O profile, make sure they are spread across different LUNs on the SAN.


Optimizing for Intel® Virtualization Technology

Using IVT helps reduce the costs of a software-only VMM, addressing the problems described above.

Using IVT will:

Reduce VMM complexity

  • Eliminate "virtualization holes" by design
  • Reduce the need for device-specific knowledge in the VMM
  • Enhance reliability and protection
  • Provide new control over device DMA and interrupts

Enhance functionality

  • Support legacy (unmodified) guest operating systems
  • Allow pass-through access to I/O devices where appropriate

Improve performance

  • Eliminate unnecessary transitions to the VMM
  • Provide new address-translation mechanisms (for the CPU and for devices)
  • Reduce memory requirements (translated code, shadow tables)


Intel Provides ISVs with Intel® Virtualization Technology (Intel® VT) Development Tools

Architecture and Hardware

For more than 30 years, Intel has led the chip industry, driving logic processors toward ever higher speeds and performance while researching new materials and architectures. Intel's history of innovation continues with new platforms that enable new opportunities. Among the most recent is Intel® Virtualization Technology (Intel® VT), a set of hardware enhancements to Intel® server and client platforms that significantly improve virtualization solutions. Intel® Virtualization Technology includes:

VT-x — Intel® Virtualization Technology for the IA-32 architecture adds two new CPU operating modes to IA-32: VMX root operation and VMX non-root operation. VMX root operation is intended for the VMM, and it behaves much like IA-32 without VT-x. VMX non-root operation provides an alternative IA-32 environment that is controlled by the VMM and designed to support virtual machines. Both operating modes support all four privilege levels, so guest software can run at its intended privilege levels while the VMM retains the flexibility to use several privilege levels. With VT-x, each transition between guest software and the VMM can change the linear address space, allowing guest software to make full use of its own address space. VMX transitions are managed by the VMCS, which resides in the physical address space, not the linear address space.

VT-i — Intel® Virtualization Technology for the Itanium architecture gives the VMM a virtual address bit that guest software cannot use. The VMM can hide hardware support for this bit by intercepting the guest's call to the PAL procedure that reports the number of implemented virtual address bits. The guest then does not expect to use the top bit, and the hardware does not allow it to, giving the VMM exclusive use of half the virtual address space. With VT-i, the VMM can use virtualization-acceleration fields in the virtual processor descriptor (VPD) to indicate that guest software may read or write the interruption control registers without invoking the VMM on each access. The VMM can set the values of these registers before any virtual interrupt is delivered and can modify them before the guest's interrupt handler returns.

VT-d — Intel® Virtualization Technology for Directed I/O is the next key step toward comprehensive hardware support for virtualizing Intel platforms. VT-d extends Intel® Virtualization Technology's roadmap from today's support for IA-32 (VT-x) and Itanium® processor (VT-i) virtualization to new support for I/O-device virtualization. VT-d meets two key requirements of using virtual machine technology. First, protected access to I/O resources from one virtual machine must not interfere with the operation of another VM on the same platform; this isolation between VMs is the foundation for availability, reliability, and trust. Second, the virtualized platform must be able to share I/O resources among multiple virtual machines, since replicating I/O resources such as storage or network controllers for each VM is neither practical nor cost-effective. VT-d's I/O-device virtualization addresses both of these requirements.

Software

Intel works closely with developers and universities to help them create consumer and business software that runs better and faster on Intel® multi-core platforms. Based on current needs and trends, Intel believes processor and platform architectures will need to evolve toward virtualized, reconfigurable chip-level multiprocessing (CMP) architectures with a large number of cores, rich built-in processing capabilities, large on-chip memory subsystems, and sophisticated microkernels.

Support and Training

The Intel Software Network Forums provide a place where you can ask questions about Intel software development products, Intel® platforms and technologies, and other topics, and get answers. Intel engineers participate in the forum discussions and provide answers. Another option is Intel® Software Network Support. Vendors of virtualization solution software for Intel platforms, including Microsoft*, VMware*, and XenSource*, jointly maintain community web sites offering discussion space and other information.

Intel offers the following training resources:

The Intel® Learning Network offers web-based training and webinars on a wide range of technologies.

Intel On-Demand Webcasts provide on-demand access to recent Intel webcast presentations.

The Intel® Software College, aimed at a software development audience, offers flexible course formats, including instructor-led, online, and customized training.

Research and Development

Intel and VMware are currently working with leading software vendors to provide a pool of proven, virtualization-ready software solution stacks for deployment on VMware Infrastructure 3* and Intel® Xeon® processor-based server platforms built on the Intel® Core™ microarchitecture.


Summary

Virtualization is not only an important factor in IT planning and deployment; it also benefits the application development process. ISVs can make their software engineers more productive by using virtual machines to reduce non-productive time, gain more secure environments, and reach customers in flexible new ways.

Customers will always look for software that takes full advantage of virtual machines rather than hindering their use. By applying the techniques above and leveraging the hardware-based benefits of Intel® Virtualization Technology, ISVs gain a distinct advantage in increasingly virtualized IT environments.


Resources

The Intel Virtualize ASAP program — run by Intel, VMware, and other leading software vendors — shares expertise, best-practice information, implementation guides, and reference configurations for developing applications that run in virtualized environments using VMware® Infrastructure 3 and Intel® Xeon® processor-based server platforms.

Intel® Virtualization Technology — Intel® Virtualization Technology (Intel® VT) is a set of hardware enhancements to Intel® architecture server and client platforms that significantly improve traditional software-based virtualization solutions.

Microsoft Virtual PC 2007 — Whether you already use Microsoft virtualization technology in your existing infrastructure or are simply a virtual-PC enthusiast, you can now download Virtual PC 2007 free of charge.

Software Developer FAQ: Intel® Virtualization Technology — Frequently asked questions about Intel® Virtualization Technology, including developer forums for discussing and learning about these processors.

VirtualBox from InnoTek is a family of x86 virtualization products for Windows and 32-bit Linux hosts, with full support for Intel hardware virtualization VT-x. VirtualBox supports a large number of guest operating systems, including Windows (NT 4.0, 2000, XP, Server 2003, Vista), DOS/Windows 3.x, Linux (2.4 and 2.6), and OpenBSD. It was released under the GNU General Public License (GPL) in January 2007. It can run virtual machines remotely over the Remote Desktop Protocol (RDP) and supports remote devices over iSCSI and USB.

vmdev.net offers a set of advanced virtualization development programs run directly with VMware's world-class R&D organization.

VMTN — The VMware Technology Network web site is an online resource center for developers that includes a collection of pre-built virtual machines from name-brand vendors such as Oracle, BEA, Red Hat, Novell, and MySQL. VMTN also publishes a wealth of technical content, including articles, how-to training material, white papers, and VMware product documentation.

The VMware Virtualization Development Center is VMware's portal for access to advanced virtualization development through community source programs. It is open to any software or hardware vendor interested in developing products with VMware's virtualization software. Registered members gain access to source code, documentation, and other resources.

VMware is a global leader in industry-standard system virtualization infrastructure software; organizations of every size worldwide use VMware solutions (Virtual Lab Automation, VMware ESX) to simplify their IT operations, leverage their existing computing investments, and respond quickly to changing business demands.

Xen is an open-source virtual machine monitor (VMM) developed at the University of Cambridge that allows modified operating systems to run on top of the monitor. Intel has used Intel® Virtualization Technology to extend the Xen VMM so that it can also run unmodified guest operating systems. This currently works on 32-bit Intel® architecture processors and Itanium® architecture processors.


Related Reading

An Introduction to Integrating Intel® Virtualization Technology into the Itanium Architecture

How VT-x and VT-i Solve Virtualization Challenges

Intel® Virtualization Technology Articles

Intel® Virtualization Technology for Directed I/O

Intel® Virtualization Technology in Embedded and Communications Infrastructure Applications

Intel® Virtualization Technology: Hardware Support for Efficient Processor Virtualization

Applying Intel® Virtualization Technology to Enable New Client Virtualization Usage Models


Trademark Information

Public use of Intel trademarks requires Intel's permission. Proper use of Intel trademarks in advertising and promotion of Intel products requires the appropriate legal notices.


About the Author

Thomas Wolfgang Burger is the owner of Thomas Wolfgang Burger Consulting. He has been a consultant, instructor, writer, analyst, and applications developer since 1978. He can be reached at twburger@gmail.com.

 


 

[1] A raw device mapping (RDM) exposes an entire SAN LUN to the virtual machine rather than having the VMware ESX hypervisor create a VMFS (VMware's SAN file system) volume on it. This (usually) improves performance and allows the use of SAN-specific tools, letting the virtual machine interact with the SAN directly.

  • Developers
  • Virtualization

  • Reaching Technology From Blogs Show 1

    Down to Business 6

    Down to Business 7


    Ylian Saint-Hilaire provides an introduction to the Manageability Developer Toolkit (AMT Commander)  - Mesh Edition.  This edition of the Manageability Developer’s Toolkit has internationalization support, automatic updates and cloud support (you can now remotely manage your Mesh devices from the cloud.)

    http://opentools.homeip.net/open-manageability

  • News
  • Developers
  • Business Client
  • Intel® Active Management Technology
  • Virtualization
  • AMT
  • Active Management Technology
  • AMT Commander
  • Ylian St. Hilaire
  • virtualization
  • Down to Business 8


    Ylian Saint-Hilaire demonstrates the Intel System Defense Utility (Intel SDU), which is available on all Intel Executive Series Motherboards supporting Intel AMT. Intel SDU is an entry-level business management tool that also supports your "Mesh" network, remote management, asset management, and event logs.

    http://www.intel.com/design/motherbd/software/isdu/index.htm

  • News
  • Developers
  • Business Client
  • Intel® Active Management Technology
  • Security
  • Virtualization
  • Intel System Defense Utility
  • ISDU
  • System Defense Utility
  • Intel AMT
  • Intel Active Management Technology
  • AMT Commander
  • Intel Perceptual Computing (1) – Introduction


    Intel perceptual computing lets devices sense and understand user behavior for human-computer interaction; it offers a more natural, immersive, and intuitive way of interacting. The Intel Perceptual Computing SDK beta is now available; visit http://software.intel.com/en-us/vcsource/tools/perceptual-computing-sdk to download the installer, as shown in Figure 1:

    [Figure 1]

    First select Perceptual Computing in the drop-down box on the right side of Figure 1, then click the Download button.

     

    Intel perceptual computing supports several usage modes, for example:

    • Speech recognition (Figure 2)
    • Face recognition (Figure 3)
    • Close-range tracking, such as close-range finger tracking (Figure 4)
    • 2D/3D object tracking (Figure 5)

    If your English is good, it is worth watching what David Perlmutter has to say about Intel perceptual computing:

    Video: http://v.youku.com/v_show/id_XNDgxNTQ0NjQw.html

     

     

  • perceptual computing, sensors, camera, facial recognition, natural gestures, gesture control, 2D/3D objects, blog challenge

  • Courseware
  • Technical Article
  • Tutorial
  • Reaching Technology From Blogs 6


    Ylian Saint-Hilaire has moved his Meshcentral software over to the latest OS and development environments. In his blog post, "Moving over to the latest OS & developer environment," find out about some of the issues Ylian ran into.

    http://software.intel.com/en-us/blogs/2012/11/11/moving-over-the-the-latest-os-developer-environment

     Ylian’s Blogs:  http://software.intel.com/en-us/blogs/author/337009

  • News
  • Developers
  • Business Client
  • Virtualization
  • Ylian St. Hilaire
  • MeshCentral
  • blogging
  • Reaching Technology From Blogs 7


    Ylian has written a blog about Intel AMT setup and configuration using TLS-PSK and TLS-PKI. Ylian stumbled across an interesting issue while updating the provisioning functionality of the OpenDTK tool: developers who are building their own Intel AMT activation software will be required to use a non-standard TLS stack (the .NET TLS stack does not work). Watch RTFB 7 to learn more about what Intel AMT developers must know about writing software to enable Intel AMT systems with TLS.

    TLS:  Transport Layer Security

    PSK:  Pre-shared Key

    DTK:  Manageability Developers Tool Kit

    Ylian’s Blog:  OpenDTK - Intel AMT activation, what developers must know.

    View more of Ylian's blogs here.

  • News
  • Technical Article
  • Developers
  • Business Client
  • Intel® Active Management Technology
  • Virtualization
  • YouTube
  • Ylian St. Hilaire
  • tls
  • AMT
  • Intel AMT
  • OpenDTK Tool
  • RTFB
  • dtk

  • GDB - The GNU* Project Debugger for Intel® Architecture


    Introduction

    The Intel® System Studio contains a build of GDB, the GNU* Project Debugger, that has been tested against the cross-build requirements of developing and debugging applications targeting embedded devices and intelligent systems. In addition, the GDB provided by Intel offers extra features to identify and fix data races during debugging. On the Intel® Atom™ processor it also gives access to application-level Last Branch Record (LBR) instruction trace, a very powerful tool to supplement the call-stack backtrace and unwind the instruction flow leading up to an error condition, even under the most challenging circumstances.

    In this article we focus on the use of GDB for embedded use cases and on the strengths of the unique Intel® System Studio feature set of GDB.


    Using GDB to debug applications on embedded devices

    Please refer to GDB: The GNU* Project Debugger (http://www.gnu.org/software/gdb/) for details.
    For cross-development GDB comes with a remote debug agent called gdbserver. This debug agent can be installed on the debug target to launch a debuggee and attach to it remotely from the development host.
    This can be useful in situations where the program needs to be run on a target host that is different from the host used for development, particularly when the target has a limited amount of resources (either CPU and/or memory).
    To do so, start your program using gdbserver on the target machine. gdbserver then automatically suspends the execution of your program at its entry point, waiting for a debugger to connect to it. The following commands start an application and tell gdbserver to wait for a connection from the debugger on localhost port 2000.


          $ gdbserver localhost:2000 program
         Process program created; pid = 5685
         Listening on port 2000


    Once gdbserver has started listening, we can tell the debugger to establish a connection with this gdbserver, and then start the same debugging session as if the program was being debugged on the same host, directly under the control of GDB.


          $ gdb program
     (gdb) target remote targethost:2000
     Remote debugging using targethost:2000
         0x00007f29936d0af0 in ?? () from /lib64/ld-linux-x86-64.so.
         (gdb) b foo.adb:3
         Breakpoint 1 at 0x401f0c: file foo.adb, line 3.
         (gdb) continue
         Continuing.
        
         Breakpoint 1, foo () at foo.adb:4
         4       end foo;


    It is also possible to use gdbserver to attach to an already running program, in which case the execution of that program is simply suspended until the connection between the debugger and gdbserver is established. The syntax would be


    $ gdbserver localhost:2000 --attach 5685

    to tell gdbserver to wait for GDB to attempt a debug connection to the running process with process ID 5685

     

    Using GDB to debug applications running inside a virtual machine

    Using GDB for remotely debugging an application running inside a virtual machine follows the same principle as remote debug using the gdbserver debug agent.

    The only additional step is to ensure TCP/IP communication forwarding from inside the virtual machine and making the ip address of the virtual machine along with the port used for debug communication visible to the network as a whole.

    Details on how to do this setup can be found on Wikibooks* (http://en.wikibooks.org/wiki/QEMU/Networking)

    The basic steps are as follows:

    1. Install QEMU, the KQEMU accelerator and bridge-utils


    $ su -
    $ yum install qemu bridge-utils

    2. Creating the image for the guest OS

    For best performance, you should install your guest OS to an image file. To create one, type:


    $ qemu-img create filename size[ M | G ]


    where filename is going to be the name of your image, and size is the size of your image with the suffix 'M' (MB) or 'G' (GB) right after the number, no spaces.


    $ qemu-img create Linux.img 10G

    3. Configuring network for your guest OS

    Put the following contents into /etc/qemu-ifup:


    #!/bin/sh
    #
    # script to bring up the device in QEMU in bridged mode
    #
    # This script bridges eth0 and tap0. First take eth0 down, then bring it up with IP 0.0.0.0
    #
    /sbin/ifdown eth0
    /sbin/ifconfig eth0 0.0.0.0 up
    #
    # Bring up tap0 with IP 0.0.0.0, create bridge br0 and add interfaces eth0 and tap0
    #
    /sbin/ifconfig tap0 0.0.0.0 promisc up
    /usr/sbin/brctl addbr br0
    /usr/sbin/brctl addif br0 eth0
    /usr/sbin/brctl addif br0 tap0
    #
    # As we have only a single bridge and loops are not possible, turn spanning tree protocol off
    #
    /usr/sbin/brctl stp br0 off
    #
    # Bring up the bridge with IP 192.168.1.2 and add the default route
    #
    /sbin/ifconfig br0 192.168.1.2 up
    /sbin/route add default gw 192.168.1.1
    #stop firewalls
    /sbin/service firestarter stop
    /sbin/service iptables stop

    Please change the IPs to match your setup.

    Now, put this into /etc/qemu-ifdown:


    #!/bin/sh
    #
    # Script to bring down and delete bridge br0 when QEMU exits
    #
    # Bring down eth0 and br0
    #
    /sbin/ifdown eth0
    /sbin/ifdown br0
    /sbin/ifconfig br0 down
    #
    # Delete the bridge
    #
    /usr/sbin/brctl delbr br0
    #
    # bring up eth0 in "normal" mode
    #
    /sbin/ifup eth0
    #start firewalls again
    /sbin/service firestarter start
    /sbin/service iptables start


    Make the scripts executable so QEMU can use them:


    $ su -
    $ chmod +x /etc/qemu-if*
    $ exit


    4. Installing the guest OS

    Type the following to start the installation:


    $ su
    $ /sbin/modprobe tun
    $ qemu -boot d -hda image.img -localtime -net nic -net tap -m 192 -usb -soundhw sb16 -cdrom /dev/hdc;/etc/qemu-ifdown


    Where image.img was the name you gave to your image earlier (Linux.img in the example above). I'm also assuming /dev/hdc is your CD drive - if it's not, then please change it to the correct device. After the install is complete, proceed to step 5.

    5. Making the run script & running at will

    The last step is to create the QEMU start script and from there on you can run your guest OS. Create this file - called qemustart - in the same directory as your image:


    #!/bin/sh
    su -c "/sbin/modprobe tun;qemu -boot c -hda image.img -localtime -net nic -net tap -m 192 -usb -soundhw sb16;/etc/qemu-ifdown"


    Where image.img was the name given to the image earlier.
    Last step - make the startup script executable:


    $ chmod +x /path/to/qemustart

     

    Debugging Data Race Conditions

    A data race occurs when multiple threads access overlapping memory without synchronization. Although data races may be harmless or even part of the design in some cases, a data race typically indicates a bug. GDB may be used as a front end for the parallel debug extension (PDBX) data race detector that is part of the Intel compiler. The PDBX data race detector consists of compiler instrumentation and a run-time support library, both provided by the Intel compiler. The PDBX run-time library provides a debugger interface for communicating detected data races as well as for configuring the analysis. The PDBX data race detector is enabled with the '-debug parallel' compiler option. This option is available with the Intel® C++ Compiler starting from version 12.1, supporting GNU*/Linux* on IA-32 and Intel® 64 architectures. The data race detector can handle pthread and Intel OpenMP synchronization primitives. When debugging remotely, make sure that gdb finds the correct version of libpdbx that is used on the target. When using OpenMP, the following variables must be defined in the debuggee's environment:

    INTEL_LIBITTNOTIFY32=""
    INTEL_LIBITTNOTIFY64=""
    INTEL_ITTNOTIFY_GROUPS=sync
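
    For context, here is a minimal example (assumed for illustration, not taken from the Intel documentation) of the kind of bug the detector reports: two threads increment a shared counter without synchronization. Built with '-debug parallel' as described above, the racing accesses to counter would be flagged while running under gdb.

        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;            /* shared, unsynchronized */

        static void *bump(void *arg)
        {
            for (int i = 0; i < 100000; i++)
                counter++;                  /* read-modify-write data race */
            return NULL;
        }

        int main(void)
        {
            pthread_t a, b;
            pthread_create(&a, NULL, bump, NULL);
            pthread_create(&b, NULL, bump, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("counter = %ld (expected 200000)\n", counter);
            return 0;
        }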

    1. Enable, Disable, and Reset


    The PDBX data race detector logs all memory accesses and synchronization for all threads. It checks for data races on each memory access. As can be expected, this is very expensive, both in terms of performance and memory consumption. To mitigate this, you can fine-tune the data race analysis. While you can't control logging of synchronization events, you can control logging of memory accesses. Memory accesses are logged so long as the data race detector is enabled, and because the data race detector always logs synchronization, it may be disabled and re-enabled at any time. Of course, the detector is only able to detect data races on known memory accesses. In addition to selective enabling, you can get the data race detector to discard the memory logs it has collected. This may be useful when debugging data races in separate parts of the program under memory constraints.


    The following commands enable, disable, and reset the data race analysis:


    pdbx

    Prints the data race detector status. It shows whether the data race detector is enabled, whether the run-time library was loaded, and the version being used.


    pdbx enable


    Enables memory access logging and data race detection. The data race detector logs memory accesses and reports detected data races to gdb.

    pdbx disable


    Disables memory access logging and data race detection. The data race detector stops logging memory accesses. Data races that are about to be reported are discarded.


    pdbx reset


    Discards memory access logs. Data races are only detected for new memory accesses. This reduces the memory consumption of the data race detector and may also improve performance.


    2. Filters


    You can configure the data race detector to ignore parts of the application, which may be useful under a variety of different use cases, such as to:


    • Ignore false positives
    The data race detector may incorrectly report correctly synchronized accesses as data races. This is typically caused by a synchronization construct that is not known to the data race detector. It may also be caused by partially instrumented applications.


    • Ignore intended or harmless data races
    In some cases, data races may be harmless or even intended. Examples are data races where all threads are guaranteed to write either the same or an equivalent value. Accepting such a harmless data race is typically cheaper than synchronizing the threads.


    • Ignore currently irrelevant data races
    The data race detector may correctly report data races that are not related to the bug currently under observation. Such distracting data races may be ignored until a later time.


    • Improve data race analysis performance
    Ignoring parts of your program may have a significant impact on the analysis performance overhead. It is typically cheaper to repeatedly analyze a small portion of your program than it is to analyze the whole program.


    • Improve data race analysis memory consumption
    Ignoring parts of your program may have a significant impact on the analysis memory consumption. This allows data race detection to be used in scenarios where a whole program analysis would not be feasible.


    gdb uses filters expressed in source language terms to describe parts of the program that should be ignored/focused upon. Filters can be defined using the following commands:


    pdbx filter code addr
    pdbx filter code start - end


    This filter specifies either a single code address or a range of code addresses from start (inclusive) to end (exclusive).


    pdbx filter data addr
    pdbx filter data start - end


    This filter specifies either a single data address or a range of data addresses from start (inclusive) to end (exclusive).

    pdbx filter line expr


    This filter specifies the code generated for a single source line. The expr argument must be of a form accepted by info line and is interpreted in the current context.


    pdbx filter variable expr


    This filter specifies the data object corresponding to the source expression expr evaluated in the current context.


    pdbx filter reads


    This filter specifies all read accesses in the program. It is important to note that filters on stack objects are not automatically removed or disabled when the object is destroyed.


    3. Filter Sets


    Filters are organized in sets. A filter set defines the semantics of the filters it contains and can be set to either the ‘suppress’ or ‘focus’ type. The following commands alter the type of the current filter set:


    pdbx fset suppress


    Set the type of the current filter set to suppress. The filters in this filter set specify the parts of the program that should be ignored by the data race detector. Memory accesses made from or to filtered areas are not logged and do not participate in the data race analysis.


    pdbx fset focus


    Set the type of the current filter set to focus. The filters in this filter set specify the parts of the program that should be analyzed for data races. Memory accesses that are not made from or to filtered areas are not logged and do not participate in the data race analysis. Beware that an empty focus filter set ignores the entire program. When debugging data races, it may be useful to define different filter sets and switch between them during the course of the debug session. gdb provides the following commands to manage filter sets:


    pdbx fset new name


    Create a new filter set with the given name. The name must start with a letter. If a filter set with that name already exists, it results in an error and no filter set is created. The current filter set is not changed in that case. The filter set is initially empty and of type suppress. The new filter set will automatically be selected. Use filter commands or pdbx fset import to populate the filter set.


    pdbx fset delete name


    Delete a filter set with the given name. If no filter set with that name exists, it results in an error. The current filter set can not be deleted.


    pdbx fset select name


    Select the filter set with the given name. If no filter set with that name exists, it results in an error. The current filter set is not changed in that case.

    pdbx fset list


    List all filter sets. For each filter set, the type, name, and the number of filters it contains are printed. When new filters are created, they are automatically added to the current filter set. In addition to this, gdb provides the following commands to deal with filters within filter sets. Each of these commands accepts an optional filter set name and an optional range argument. If both arguments are present, the name precedes the range and they are separated by a colon :. The range argument may be a single integer or two integers separated by a dash -. Examples are:


    pdbx fset cmd


    Target all filters in the current filter set.


    pdbx fset cmd num


    Target filter num in the current filter set.


    pdbx fset cmd start - end


    Target filters start (inclusive) through end (inclusive) in the current filter set.


    pdbx fset cmd name


    Target all filters in filter set name.


    pdbx fset cmd name: num


    Target filter num in filter set name.


    pdbx fset cmd name: start - end


    Target filters start (inclusive) through end (inclusive) in filter set name. If a name argument is given but no filter set with that name exists, the command terminates with an error message. If a range argument is given and some of the filter numbers are out of bounds, the command gives an error message and operates on the filter numbers that are inside the bounds of the filter set.


    The following commands are provided to manage filters within filter sets:


    pdbx fset show


    List filters in a filter set.

    By default, filters are printed as specified by the pdbx filter command. With the /r modifier, the addresses that are used for that filter are printed, instead.


    pdbx fset remove


    Remove filters from a filter set.


    pdbx fset enable


    Enable filters in a filter set.

    Filters that have already been enabled are not modified. Filters that have been disabled are enabled and evaluated in the current context. If evaluation fails, the respective filter is marked pending and will not contribute to the data race detector configuration.


    pdbx fset disable


    Disable filters in a filter set. Disabled filters do not contribute to the data race detector configuration.


    pdbx fset evaluate


    Evaluate filters in a filter set.

    Filters are evaluated in the current context. If evaluation fails, the respective filter is marked pending and does not contribute to the data race detector configuration.


    pdbx fset import


    Import filters into the current filter set. Imported filters are added to the end of the current filter set. They keep their state and are not re-evaluated. The same filter may be imported multiple times. Filters can not be imported from the current filter set.


    4. Race Detection History


    GDB keeps a history of data races reported by the PDBX data race detector. This list may be used for setting filters on a subsequent run. Except for pdbx history, all history commands accept an optional range argument. The range argument may be a single integer or two integers separated by a dash -. The range argument specifies the reports that the command operates upon. The pdbx history command does not take any arguments. The following commands operate on the data race history:


    pdbx history


    Prints a brief summary of the data race history showing the number of reported data races, the number of threads involved, and the number of read, write, and update accesses in that order.


    pdbx history remove


    Removes data race reports from the history.


    pdbx history list


    Prints a brief summary of data race reports in the history in the form of a list. The list includes the number of different threads involved in the data race, as well as the number of read, write and update accesses, one report per line.


    pdbx history show


    Print a detailed description of data race reports in the history. The detailed description prints each memory access involved in the data race, one access per line. By default, gdb tries to map data addresses back to variables and code addresses back to source lines. With the /r modifier, raw addresses are printed, instead.

    Branch Instruction Tracing

    Intel® Architecture on the Intel® Atom™ processor offers a feature called Branch Trace Store (BTS) that stores a log of branches into an OS-provided ring buffer. The
    GNU/Linux operating system supports this feature since version 2.6.32 as part of the perf_event interface.

    The gdb extension for branch tracing is based on the hardware BTS feature making it very useful to debug problems that do not immediately result in a crash. It is particularly useful for bugs that make other debugger features fail, for example, a corrupted stack that breaks unwinding. You can use the gdb branch tracing commands to record program control flow and view the recorded branch trace as a

    • list of blocks of sequential execution (list view)
    • disassembly of one of the listed blocks


    Branch tracing is less powerful when compared to reverse debugging,  but it is considerably faster. In addition, the list view provides a quick overview of where you are, and is therefore comparable with the backtrace command.

    1. Branch Tracing Commands

    Enable and Disable Branch Tracing


    To enable/disable branch tracing use the btrace enable/disable commands.


    btrace enable/disable


    This command starts/stops recording branch trace information for program threads. It is available in three flavors.


    btrace enable/disable all


    Starts/stops recording the branch trace for all threads.


    btrace enable/disable auto


    Automatically enables/disables recording branch trace for all new threads. Branch tracing induces significantly less overhead than full recording, yet the overhead is noticeable for longer-running applications. Unless you feel that the overhead is disturbing, you could simply turn on automatic enabling and forget about the feature until you need it.


    btrace enable/disable [<begin>[-<end>]]


    Starts/stops recording branch trace for a specified range of threads. If no argument is provided, the command applies to the selected thread.

    2. List Traced Blocks


    To list the traced blocks use the btrace list command.


    btrace list [<begin>[-<end>]]
    btrace list /a
    btrace list /f
    btrace list /l
    btrace list /t


    This command prints the blocks that have been traced, one line per block. It accepts an optional range argument, specifying the range of blocks to be listed. If no argument is given, all blocks are listed. The output can be configured using the modifiers, /a, /f, /l, where the default is /fl. The command prints:

    <nn> the block number
    /a the begin and end code address of that block
    /f the function containing the block
    /l the source lines contained in the block

    Blocks are ordered from newest to oldest: block 1 always contains the current location.


    (gdb) btrace list 24-34
    24 in stdio_file_flush () at ../../../git/gdb/ui-file.c:525-529
    25 in ui_file_data () at ../../../git/gdb/ui-file.c:175-180
    26 in stdio_file_flush () at ../../../git/gdb/ui-file.c:522-523
    27 in gdb_flush () at ../../../git/gdb/ui-file.c:185
    28 in gdb_wait_for_event () at ../../../git/gdb/event-loop.c:840-847
    29 in gdb_do_one_event () at ../../../git/gdb/event-loop.c:461
    30 in gdb_do_one_event () at ../../../git/gdb/event-loop.c:453
    31 in process_event () at ../../../git/gdb/event-loop.c:407
    32 in process_event () at ../../../git/gdb/event-loop.c:361-367
    33 in process_event () at ../../../git/gdb/event-loop.c:1041-1043
    34 in process_event () at ../../../git/gdb/event-loop.c:1041-1045


    3. Print Branch Trace Disassembly


    To print branch trace disassembly use the btrace command.


    btrace [+, -, <begin>[-<end>]]
    btrace /m
    btrace /r

    Prints branch trace disassembly, block by block. The btrace command accepts an optional range argument specifying the range of blocks to be printed. If more than one block is specified, the blocks are printed in reverse order to preserve the original control flow. Repeated commands iterate over all blocks similar to the gdb list command. The btrace command supports the /m and /r modifiers accepted by the gdb disassemble command. The /m modifier is used to interleave source information.


    (gdb) btrace /m 25
    ../../../git/gdb/ui-file.c:175{
    0x0000000000635410 <ui_file_data+0>: sub $0x8,%rsp
    ../../../git/gdb/ui-file.c:176 if (file->magic != &ui_file_magic)
    0x0000000000635414 <ui_file_data+4>: cmpq $0xb33b94,(%rdi)
    0x000000000063541b <ui_file_data+11>: jne 0x635426 <ui_file_data+22>
    ../../../git/gdb/ui-file.c:177 internal_error (__FILE__, __LINE__,
    0x000000000063541d <ui_file_data+13>: mov 0x50(%rdi),%rax
    ../../../git/gdb/ui-file.c:178 _("ui_file_data: bad magic number"));
    ../../../git/gdb/ui-file.c:179 return file->to_data;
    ../../../git/gdb/ui-file.c:180}
    0x0000000000635421 <ui_file_data+17>: add $0x8,%rsp
    0x0000000000635425 <ui_file_data+21>: retq


    Note that using the mixed source and disassembly modifier does not work very well for inlined functions, a problem that the btrace command shares with the gdb disassemble command.

    Example:

    The program crashed, and the backtrace is not much help:
     

    (gdb) run  
    Starting program: ../gdb/trace/examples/function_pointer/stack64
    Program received signal SIGSEGV, Segmentation fault.
    0x000000000000002a in ?? ()
    (gdb) bt
    #0  0x000000000000002a in ?? ()
    #1  0x0000000000000017 in ?? ()
    #2  0x000000000040050e in fun_B (arg=0x4005be) at src/stack.c:32
    #3  0x0000000000000000 in ?? ()

    Look at the branch trace.
    List of blocks starts from the most recent block (ending at the current pc) and continues towards older blocks such that control flows from block n+1 to block n.

    (gdb) btrace list 1-7
    in ?? ()
    in fun_B () at src/stack.c:36-37
    in fun_B () at src/stack.c:32-34
    in main () at src/stack.c:57
    in fun_A () at src/stack.c:22-25
    in fun_A () at src/stack.c:18-20
    in main () at src/stack.c:51-56

    From main(), we first called fun_A() and then fun_B(). The call to fun_A() returned, and we crashed somewhere in fun_B().

    Look at the disassembly of the last 3 blocks in original control flow (i.e. reverse trace) order, starting from the call to fun_B() from main().

    /m interleaves source info

    (gdb) btrace /m 1-3
    src/stack.c:32  static long fun_B(void* arg) {  
     0x000000000040050e <fun_B+1>:        mov    %rsp,%rbp   
     0x0000000000400511 <fun_B+4>:        mov    %rdi,-0x18(%rbp)  
    src/stack.c:33      struct B_arg* myarg = arg;   
     0x0000000000400515 <fun_B+8>:        mov    -0x18(%rbp),%rax  
     0x0000000000400519 <fun_B+12>:       mov    %rax,-0x8(%rbp)  
    src/stack.c:34      if (!myarg) return -1;   
     0x000000000040051d <fun_B+16>:       cmpq   $0x0,-0x8(%rbp)  
     0x0000000000400522 <fun_B+21>:       jne    0x40052d <fun_B+32>  
    src/stack.c:36      return myarg->arg1 + myarg->arg2;  
     0x000000000040052d <fun_B+32>:       mov    -0x8(%rbp),%rax  
     0x0000000000400531 <fun_B+36>:       mov    (%rax),%rdx  
     0x0000000000400534 <fun_B+39>:       mov    -0x8(%rbp),%rax  
     0x0000000000400538 <fun_B+43>:       mov    0x8(%rax),%rax  
     0x000000000040053c <fun_B+47>:       lea    (%rdx,%rax,1),%rax  
    src/stack.c:37  }  
     0x0000000000400540 <fun_B+51>:       leaveq   
     0x0000000000400541 <fun_B+52>:       retq 
    0x000000000000002a:  Cannot access memory at address 0x2a

    fun_B() executed and returned to an invalid address, suggesting a corrupted stack: fun_B() executes leave, but there was no corresponding push on entry to fun_B(). This means the function pointer comp that was called in main() had been corrupted.

    Integrate GDB into Eclipse* CDT

    To use the provided GNU* Project Debugger GDB instead of the default GDB provided with your distribution's GNU* tools installation, please source the following debugger environment setup script:


    <install-dir>/system_studio_2013.0.xxx/debugger/gdb/bin/debuggervars.sh

    Remote debugging with GDB using the Eclipse* IDE requires installation of the C/C++ Development Toolkit (CDT)  (http://www.eclipse.org/downloads/packages/eclipse-ide-cc-linux-developers-includes-incubating-components/indigosr2) as well as Remote System Explorer (RSE) plugins (http://download.eclipse.org/tm/downloads/). In addition RSE has to be configured from within Eclipse* to establish connection with the target hardware.

    1. Copy the gdbserver provided by the product installation


    <install-dir>/system_studio_2013.0.xxx/debugger/gdb/<arch>/<python>/bin/


    to the target system and add it to the execution PATH environment variable on the target.

    2. Configure Eclipse* to point to the correct GDB installation:

    a. Inside the Eclipse* IDE click on Window>Preferences from the pulldown menu.

    b. Once the preferences dialogue appears select C++>Debug>GDB from the treeview on the left.

    c. The GDB executable can be chosen by editing the “GDB debugger” text box. Point to

    <install-dir>/system_studio_2013.0.xxx/debugger/gdb/<arch>/<python>/bin/,

    where <arch> is ia32 or intel64 and <python> is py24, py26, or py27, depending on architecture and Python* installation

    Summary

    Intel provides extra capabilities in GDB, the GNU* Project Debugger, targeted at strengthening its ability to quickly find and resolve vexing runtime issues in code running on Intel® architecture-based devices. The application-specific branch trace supplements the call-stack backtrace and reverse execution by providing a fast and reliable method of unwinding past execution flow and pinning down root causes of segmentation faults and issues that corrupt the call stack. PDBX-based data race detection provides the ability to pin down root causes of concurrency-introduced runtime bugs in your code as part of your default GDB-based debug methodology. Every effort is made to ensure that the GDB integrated with the Intel® System Studio supports embedded cross-debug requirements for target OSes such as Yocto Project* and Wind River* Linux*, whether they are running on a remote small-form-factor target device or inside a virtual machine.

     

  • gdb
  • GNU
  • Developers
  • Linux*
  • MeeGo*
  • Moblin*
  • Yocto Project
  • C/C++
  • Intermediate
  • Debuggers
  • Intel® System Studio
  • Debugging
  • Embedded
  • Intel® Atom™ Processors
  • Intel® Core™ Processors
  • Virtualization
  • Attachment: gdb.pdf (3.08 MB)
  • Learning Lab
  • TouchDesigner / Interview with Jarrett Smith and Ben Voigt


    This is the second blog I have written in which TouchDesigner is mentioned, but this time I include an informative interview with Jarrett Smith, system architect of TouchDesigner, and Ben Voigt, product manager of TouchDesigner. TouchDesigner is a very exciting and unique program. I have a hard time concisely explaining what TouchDesigner is, as it has so many uses and applications. It has a node-based interface and is also open to programming; Python will be added to it as a language that can be used. I will call it a platform from which you can, according to the website, create interactive art productions, architectural and environmental projections, pre-visualization, live character puppeteering, prototype environments, projection mapping, real-time special effects, and VJ and in-studio performances.

    My use of it mostly falls within the VJ performance category. I also create and render out images from it much as you would from any 3D program such as Autodesk Maya. I find it easier to do certain types of complex modeling in Maya, so I import the model and material into TouchDesigner. I should be able to import animation as well but have not yet succeeded in that. Of course, as in most VJ programs, I can import movies and images and do compositing and other operations on them in real time.

    I am excited about using TouchDesigner in the next immersive-environment dome show I create. The mapping capabilities of TouchDesigner will enable me to map the dome and any stage sets I am using. Ideally I will be able to perform live visuals and video playback and do real-time animation, which can also be controlled by channels in my audio. TouchDesigner has a module that will enable me to hook up to Kinect hardware so I can have performers interacting with the visuals and music; I wish I had done this for the "Blue Apple" dance performance in the dome I created the visuals for several months ago. There is also a Photoshop module in TouchDesigner I would like to try out. Audience members will also be able to interact with the visuals using controls on the iPad, iTouch, or iPhone via the TouchDesigner OSCemote app. My hope is that in the future more built-out modules for the most commonly used functions will be included in the TouchDesigner interface; doing this will only increase the number of people who use the platform.

     Questions (answered by Jarrett Smith and Ben Voigt)

     What was the initial inspiration that led to the development of TouchDesigner?

     Our passion for electronic music and performing live visuals alongside musicians and DJs was the inspiration. At the time, there were not many tools, and we were hacking Houdini to become as realtime as we could make it. We saw the need for more specialized tools for live performance and decided to develop new tools to focus on realtime animation.

    Was it a labor of love?

     Yes, the love for music, visuals, and technology.

    Do you feel TouchDesigner occupies a unique spot in the market place?

     Yes, there are not many software packages that target live visual performances and the custom installation market the way TouchDesigner does. When you consider the level of customization TouchDesigner allows and the unique visual interface that constantly communicates with the user, that group is even smaller.

    What were the initial goals for the development of TouchDesigner and have they changed?

     As technology has changed over the last 10 years, so too have TouchDesigner's goals. The initial goal of making visuals performed to music is now just one part of the bigger picture. Video playback has become a much more important cornerstone of TouchDesigner technology over the past 6 years. As well, the more recent rise of projection mapping and full show control/management have defined our development over the past few years.

    What do you feel is important in interface design and what inspired the design of the TouchDesigner interface?

     When redesigning TouchDesigner's UI for 077, form really followed function. As a procedural node-based program, we realized the most time consuming thing when working was trying to identify where in the project something was taking place. How could you easily trace back where a certain effect or modification was introduced? By introducing an interactive viewer to each and every node, the user can now visually see where things change, follow their data flowing through the operators more easily. This was a breakthrough when compared to the previous workflow of loading each node into a single viewer or scouring through parameters to decipher where something was changed.

    How has user input influenced you in the continuing development of the interface and engine?

     Our users have helped guide what features we have built into TouchDesigner, either through specific requests and unique project requirements or by popular demand. For example, Kinect functionality was something our users really wanted to experiment with, so we built it in. 

    What do the majority of TouchDesigner users use TouchDesigner for?

     The uses are so varied and diverse that question is hard to answer. If we had to generalize, any form of visualization that lets you interact with it! More specifically, we've seen a large increase in people using it for high performance, large scale multi-display installations and live shows.

    What has surprised you about how people are using TouchDesigner?

    It is always a nice surprise when we find out about a new studio or individual (who we've never heard of) doing amazingly creative work somewhere around the world. The TouchDesigner Vimeo group is a constant source of inspiration for us.
    We'll never forget the time when TouchDesigner was hooked up to large trees to monitor the trees' response to music stimulation!

    Do you do personal projects using TouchDesigner? If so please describe..

     Yes, most of the employees at Derivative have also used TouchDesigner for personal projects. A variety of projects have been done, from custom VJ video mixers, to art gallery exhibits, to large-size prints, to oscilloscope rendering for a music video.

    Any plans for the future of TouchDesigner you care to disclose?

     TouchDesigner 088, the next version of TouchDesigner, is currently in beta and should be released early next year (2013). TouchDesigner 088 includes the addition of Python as the default scripting language along with a number of other new features. More information on the beta can be found here: http://www.derivative.ca/Events/2012/088BetaRelease/

     

     

     

     

  • VJ shows
  • platforms
  • projection mapping
  • visualization
  • interactivity
  • immersive environments
  • computer animation

    One platform layer to rule them all: Ultimate Coder Challenge


    There is a fundamental problem when creating new hardware: you need software using it before anyone is willing to buy it. The problem with getting software written for new hardware is that no one wants to put in the time to build applications using hardware that no one has bought yet. This chicken-and-egg problem has killed lots of cool hardware from anyone who hasn't had the skills to develop their own killer apps, or the clout to convince the world that everyone will buy their hardware. My first week of work on the challenge has been dedicated to trying to solve this problem.

    The plan is to write a platform layer like GLUT and SDL, but with a twist. Instead of giving the application access to inputs like mouse and keyboard, I describe generic inputs like pointers, axes, and events. That way an application that supports a pointer works just as well whether the input comes from a mouse, a touch screen, a Wacom, a WiiMote, or any other hardware that you can point with. Obviously they have differences you can query; a mouse may have multiple buttons, while a touch screen only has one "button". (You can't "right click" on a touch screen - well, yet. If you build that hardware, my API will support it.)
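
    To make the idea concrete, here is a tiny sketch of what such a device-agnostic pointer could look like to the application. Every name in it is hypothetical (this is not the actual Betray API), and the stub stands in for the real platform layer:

        #include <stdio.h>

        typedef enum { BT_MOUSE, BT_TOUCH, BT_TABLET } BTPointerSource;

        typedef struct {
            float x, y;                /* position in window coordinates */
            unsigned int buttons;      /* current button bitmask */
            unsigned int button_count; /* queryable: a touch screen reports 1 */
            BTPointerSource source;    /* informational only */
        } BTPointer;

        /* Stub standing in for the platform layer's pointer query. */
        static BTPointer bt_pointer_get(unsigned int index)
        {
            BTPointer p = { 320.0f, 240.0f, 0x1, 3, BT_MOUSE };
            (void)index;
            return p;
        }

        int main(void)
        {
            /* The application only ever sees "a pointer"; which device
               produced it is just a queryable detail. */
            BTPointer p = bt_pointer_get(0);
            if (p.buttons & 0x1)
                printf("pointer 0 pressed at %.0f,%.0f (%u buttons)\n",
                       p.x, p.y, p.button_count);
            return 0;
        }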

    Most platform libraries come as a DLL, and if we make this an open-source DLL, anyone could add whatever hardware support they wanted to it, and any application using the DLL would be exposed to the new hardware. Great! Except if we wrote a DLL that supported every known piece of hardware, it would obviously become huge and create dependency hell. And what if someone created a version of the DLL that supported my sound system and someone else wrote a different version of the DLL that supported my pedals - how would I be able to play my racing game and hear the engine scream when I push the pedal?

    So I decided we need a different approach: let's make the library lean and mean instead, but give it a plugin interface so that you can write modules for it that add new functionality. Each module is independent and can add as much or as little functionality as you want. The library itself is only a few files large, so you can even just drop them into your project and make a self-contained executable that has no dependencies on any DLL. Nice and tidy!
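
    On Linux, one common way to build that kind of plugin loader is dlopen/dlsym with a single well-known entry point per module. The sketch below illustrates only the mechanism; the symbol name, the callback table, and everything else in it are hypothetical, not the actual Betray interface:

        /* Build with -ldl. Each module exports one entry point through
           which it registers whatever devices it supports. */
        #include <dlfcn.h>
        #include <stdio.h>

        typedef struct {
            void (*register_pointer)(const char *name);        /* hypothetical */
            void (*register_axis)(const char *name, int dims); /* hypothetical */
        } PluginHost;

        typedef void (*PluginInitFunc)(PluginHost *host);

        int load_plugin(const char *path, PluginHost *host)
        {
            void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
            if (handle == NULL) {
                fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return -1;
            }
            /* Every module exports one well-known entry point. */
            PluginInitFunc init = (PluginInitFunc)dlsym(handle, "plugin_init");
            if (init == NULL) {
                fprintf(stderr, "no plugin_init in %s\n", path);
                dlclose(handle);
                return -1;
            }
            init(host); /* the module registers its inputs with the host */
            return 0;
        }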

    This week I set out to write this library, dubbed "Betray", and I knew I was in for a bit of indirection hell, but the problems didn't arise where I thought they would.

    The first objective was to do a bit of housekeeping and create a utility library to handle some platform-specific things that aren't related to windows, drawing, or inputs. In about a day I wrote the sub-library "imagine" to handle the following:

    - Directory management (listing volumes and directories; some work was needed to unify Unix and Windows here)
    - My application settings API (previously found in "Seduce")
    - Dynamic loading of libraries and sharing of function pointers (needed for the plugin system)
    - Threads and mutexes (previously in my old platform layer)
    - Execution (previously in my old platform layer)

    Then I went on to implement the basic out-of-the-box functionality of the Betray library (much of this was code taken from older projects):

    - Opening a window with an OpenGL/OpenGL ES context (with FSAA)
    - Mouse / keyboard
    - Reading cut/paste
    - Opening file requesters
    - Directory search
    - Execute
    - Quad-buffer stereoscopic (it should work, but I don't have a display to test it on :-()
    - Threads
    - Multi-touch (will still run on pre-7 Windows)
    - Fullscreen
    - Timers
    - Mouse warp
    See the API HERE

    This was quick and easy, and I followed it by building a brand new plug-in API. It too went fairly painlessly, although the constant passing around of function pointers got mind-numbing after a while. Once done, the new plug-in API supported:

    Allocation and setting of:
    - Multiple pointers
    - 1D-3D axes
    - Buttons with labels and key codes
    - The ability to hook into the main loop
    - The ability to listen to events from the windows event pump
    - A sound API (a few features are still missing)
    - A settings API, so that plugins can communicate their settings to an application
    - View vantage
    - View direction

    See the API HERE

    I started out writing some test plugins for some hardware I found in my apartment, like a Microsoft 360 controller. It worked brilliantly once I figured out what the DLL I needed was really called (not what MSDN says). Then I went on to write a plugin for TrackIR, and that went reasonably well too.

    Then I had this idea that turned into a rabbit hole: what if the Betray API could trick the application into drawing into a texture instead of the screen? Then (potentially) a plugin could manipulate the screen output before it is drawn to the screen. You could do things like color-correction plugins (you could play Diablo as gloomy as you want!), plugins that save out massive screenshots, and if you let the plugins draw more than once, you could even support anaglyph 3D and multi-screen CAVE environments!

    This was just too cool not to do, so I wrote all the code I thought I needed to do this. Then I ran the code... and it did nothing I thought it would. The problem is that the application has a bunch of OpenGL state, and as soon as the plugin tries to access any OpenGL functionality it needs to set its own state. That's a problem because it (a) doesn't know what state OpenGL is in, and (b) upsets the application's state. I briefly considered trying to read out the current state so that plugins could put it back once they were done with it, but that would be a huge amount of work and wouldn't be forward compatible as newer versions of OpenGL add more state. The solution will have to be to use two OpenGL contexts, and it's starting to get complex, so I will need to do way more work on this.

    Finally I came to the big prize: the depth-seeing camera Intel sent me! I'm not at all convinced that depth-seeing cameras are very good as interfaces, but there is one particular feature I've been looking for, and that is the ability to get the vantage point of the user relative to the screen. A depth-seeing camera should be able to very accurately compute the user's head position in front of the computer.

    Initially I had some problems just from the fact that the API is C++ and I am a pure C programmer, but my good friend Pontus Nyman was nice enough to lend a hand and write a C wrapper for the functionality I needed. So one night we sat down to tie together his nice wrapper with my nice plugin API. Intel has provided us with the Perceptual Computing API that contains face tracking, so this should have been easy, but when we started looking at the data coming out of it, it wasn't very good. It was jerky and imprecise, and it often didn't pick up a face more than a few times a second. All the output turned out to be 2D, which leads me to believe it isn't using the depth camera to help it do better facial recognition (the depth camera's output was less noisy than the color camera's). You do get access to the depth buffer, but it's warped, and you need to do a lookup into a UV table to map it over to the color image. The problem is that you can't do it the other way around, so it's hard to look up the depth, in the depth buffer, of the facial detection running on the color buffer. We did some hacks to get something out of it, and for a brief moment here and there it was working, but not at all reliably enough.
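
    As an illustration of the depth-to-color mapping just described, here is a minimal sketch; the buffer layouts, the normalized UV range, and every name in it are assumptions for illustration, not the actual SDK interface:

        #include <stdint.h>
        #include <stddef.h>

        typedef struct { float u, v; } UV;

        /* For every depth pixel i, look up the color pixel its UV entry
           points at. The reverse direction (color -> depth) has no such
           table, which is exactly the problem described above. */
        void map_depth_to_color(const UV *uv_table,
                                size_t depth_w, size_t depth_h,
                                const uint32_t *color,
                                size_t color_w, size_t color_h,
                                uint32_t *out /* depth_w * depth_h entries */)
        {
            for (size_t i = 0; i < depth_w * depth_h; i++) {
                size_t cx = (size_t)(uv_table[i].u * (float)(color_w - 1));
                size_t cy = (size_t)(uv_table[i].v * (float)(color_h - 1));
                out[i] = color[cy * color_w + cx];
            }
        }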

    I will give it a few more days, but right now it's not looking very good. In theory I could write my own face detection code using the depth buffer alone that could be much better, but that is a much larger project than I planned, for only a tangentially important feature. I want to begin work on my interface stuff this week; maybe it's something I can look into after GDC. This week I intend to tie up all loose ends in the Betray platform, release it, and move on to the new interface toolkit!

    Edit: Intel confirms that the algorithm is not using the depth map for face recognition, but they also suspect I have a faulty camera (I sent them images), so they are sending me a new one. The cameras are pre-production, so this kind of thing is expected. Very nice to get such fast feedback!

  • ultimate coder
  • ultimate coder challenge

  • Download
  • Sample Code
  • Technical Article
  • Good UI design from the other side - Ultimate Coder


    This week I've been thinking a lot about how to design a UI toolkit, and this is about to get very techy, because I would like to talk about API design.

    I prefer C to C++, and I'm not particularly fond of object orientation (although I use it on occasion). UIs are an area often thought of as a place where object-oriented design really shines, but I think that is because of how we think UIs should be designed. Let's have a look at how one would typically create a button in a UI system:

    void my_button_callback(void *user)
    {
        printf("Button was clickedn");
    }
    
    {
        UIContainer *c;
        c = create_ui_container();
        add_ui_button(c, x, y, "Click me!", my_button_callback, NULL);
        
        while(TRUE) /* our main loop */
        {
            manage_ui_container(c);
        }
    }
    

The idea here is that we first describe our UI in some kind of container, with a separate callback for the UI system to call, and then we let the UI system "manage" the UI for us. It's a fair number of lines and a fair bit of indirection. This works OK if we want a "fire and forget" UI where we define a static UI once and then use it over and over, but let's say we want to move the button around; then we need something like this:
    void my_button_callback(void *user)
    {
        printf("Button was clickedn");
    }
    
    {
        UIContainer *c;
        UIElement *e;
        c = create_ui_container();
        e = add_ui_button(c, x, y, "Click me!", my_button_callback, NULL);
        
        while(TRUE) /* our main loop */
        {
            move_ui_element(e, sin(current_time), cos(current_time));
            manage_ui_container(c);
        }
    }
    

Now we need a lot of handles to manage our UI: we need "c", "e", and the callback "my_button_callback". It's getting very cumbersome. Some development environments prefer to use a special tool to build UIs, often with a graphical user interface. These tools output special UI files that are loaded into the application, which we then need to read in and query, and we get something like this:
    void my_button_callback(void *user)
    {
        printf("Button was clickedn");
    }
    
    {
        UIContainer *c;
        UIElement *e;
        c = create_ui_from_file("my_ui_design.dat");
        e = query_for_element("button");
        if(e != NULL)
            attach_callback_to_element(e, my_button_callback, NULL);
    
        while(TRUE) /* our main loop */
        {
            if(e != NULL)
                move_ui_element(e, sin(current_time), cos(current_time));
            manage_ui_container(c);
        }
    }
    

    This is still more complicated since you have to deal with issues deriving from not knowing the contents of the UI description file, and even if the UI tool provides you with a nice UI it gives you no hints on how to hook it up to your application. Callbacks are especially scary since you can get weird errors if they are declared wrong. So I decided to create a UI system using immediate mode. The same code as above would then look like this:
        while(TRUE) /* our main loop */
        {
            if(my_button(x, y, "Click me!"))
                printf("Button was clickedn");
        }
    

How easy was that? No callback, no setup, just a button function that makes a button and returns TRUE if the user clicks on it. The code is WAY more readable and easy to understand, and there is no indirection. This all works brilliantly until you actually start making a real interface (I wish everything always worked as well in practice as it does in theory...). If we look at the my_button function we will soon realize that it does two separate things: one is detecting a click, and the other is drawing a button on the screen. Usually we would like to separate the two, so that we don't have to do them at the same frequency. Well, "Betray" has a nice model for this: it calls the main loop function in 3 different modes, DRAW, EVENT, and MAIN (advance time), passed as part of the input structure. This means that we can write a main loop function that looks like this:
    void my_main_loop(BInputState *input, void *user_pointer)
    {
        if(my_button(input, x, y, "Click me!"))
            printf("Button was clickedn");
    
    }
    

The my_button function can now itself determine whether it's in event, draw, or main mode. At a high level this UI code looks like it does one thing, but in fact it does 3 different things!
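
    To make that concrete, here is a minimal sketch of what the inside of such a button function might look like. BAM_DRAW appears later in this post; BAM_EVENT, the pointer helpers, and the draw call are hypothetical stand-ins rather than the actual Betray API:
    int my_button(BInputState *input, float x, float y, char *text)
    {
        /* Hypothetical hit test: is a pointer inside the button rectangle? */
        int over = pointer_is_over(input, x, y, BUTTON_SIZE_X, BUTTON_SIZE_Y);
    
        if(input->mode == BAM_DRAW) /* draw pass: render only, never report clicks */
            draw_button_graphics(x, y, text, over); /* hypothetical draw helper */
        else if(input->mode == BAM_EVENT) /* event pass: report the click */
            return over && pointer_went_active_this_frame(input); /* hypothetical: active this frame, not the last */
        /* In main (advance time) mode a stateless button has nothing to do. */
        return FALSE;
    }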

A button is a very simple UI element to implement in this way, because in a single frame you can determine whether it is triggered (you do this by checking that a pointer is over the button and that it is active this frame but wasn't the last). But what about something like a slider? If you grab hold of a slider, it needs to remain active in subsequent frames, and therefore it needs to store state that lasts for more than one frame; now our immediate-mode model breaks. We need a persistent ID for the slider. The code for this may look something like this:
    void my_main_loop(BInputState *input, void *user_pointer)
    {
        static float value = 0;
        static void *slider_handle = NULL;
        
        if(slider_handle == NULL)
            slider_handle = create_slider_handle();
    
        my_slider(input, slider_handle, &value, x, y, "slider!");
    }
    

It's starting to look an awful lot like the code in the beginning, now that we need to start keeping track of handles, especially since this code also omits freeing the handle. But wait a minute: if the slider handle only has to be a unique ID, and we don't use it to internally allocate data that needs to be freed, we could use a pointer to anything. In this case we can just use the static value itself.
    void my_main_loop(BInputState *input, void *user_pointer)
    {
        static float value = 0;
        my_slider(input, &value, &value, x, y, "slider!");
    }
    

Now everything is simple and pretty again! We can use a pointer to anything we want as a unique identifier, and if we ever need an ID and don't have one, we can just use malloc to get more:
    void my_main_loop(BInputState *input, void *user_pointer)
    {
        float *value;
        static char *ids = NULL;
        uint i;
    
        value = user_pointer; /* let's assume user_pointer keeps changing, for the sake of argument */
        
        if(ids == NULL)
            ids = malloc(2600);
        for(i = 0; i < 2600; i++)
            my_slider(input, &ids[i], &value[i], x, y + i, "slider!");
    }
    

Brilliant, this solves everything. Well, almost. When we build a UI we want to traverse the UI description differently depending on what we are doing. Take this example:
    void my_main_loop(BInputState *input, void *user_pointer)
    {
        draw_my_desktop(input, &ids[0], "picture_of_a_cat.jpg");
        if(draw_icon_on_desktop(input, &ids[1], x, y, "software.exe"))
            execute_software("software.exe");
        draw_window(input, &ids[2], x2, y2, "files");
        draw_content_in_window(input, &ids[3]);
    }
    

This all makes very much sense if we are trying to draw. We want to draw the desktop first, then over it we draw the icons, then the windows and their content. But what if we are trying to implement the event functionality? When "draw_icon_on_desktop" is called it can't really know if it can be clicked, because "draw_window" has not yet been executed, so it can't know if the user is clicking on the icon on the desktop or on a window covering it. For event-handling purposes it would be much better if the code was written in reverse order, like this:
    void my_main_loop(BInputState *input, void *user_pointer)
    {
        if(draw_content_in_window(input, &ids[0]))
            return;
    
        if(draw_window(input, &ids[1], x2, y2, "files"))
            return;
    
        if(draw_icon_on_desktop(input, &ids[2], x, y, "software"))
        {
            execute_software("software");
            return;
        }
        draw_my_desktop(input, &ids[3], "picture_of_a_cat.jpg");
    }
    

But this breaks rendering, so does this kill the idea of an immediate-mode UI toolkit? No, not quite. Our IDs come to the rescue. If we think about it, why do we click on something? Because we have seen it. If, at the time of drawing, we store where everything is being drawn, and remove stuff as it is being covered, we end up with an accurate map of what is clickable and what is covered. When any button wants to know if it is clickable, it just looks up its ID in this buffer. In fact, the button doesn't even have to do the look-up, because the system can look up once what each pointer is over, and then all other widgets can be ignored. As you can imagine, making an API deal with all this without the user even noticing is a lot of work, but if the goal is to build the ultimate UI toolkit, then work should be expected.
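
    As a sketch of the idea (all names here are hypothetical, not the real toolkit's API): during the draw pass every widget registers the rectangle it drew together with its ID; once per frame the list is scanned so that the last rectangle under the pointer, which is the topmost one drawn, wins; during the event pass a widget only asks whether the pointer was resolved to its ID.
    typedef struct {
        void *id;
        float x, y, w, h;
    } DrawnRect;
    
    static DrawnRect drawn[1024];
    static unsigned int drawn_count = 0;
    static void *hot_id = NULL; /* the ID the pointer was over last frame */
    
    /* Called by every widget during the draw pass, in back-to-front order. */
    void register_drawn_rect(void *id, float x, float y, float w, float h)
    {
        if(drawn_count < 1024)
        {
            drawn[drawn_count].id = id;
            drawn[drawn_count].x = x;
            drawn[drawn_count].y = y;
            drawn[drawn_count].w = w;
            drawn[drawn_count].h = h;
            drawn_count++;
        }
    }
    
    /* Called once per frame after drawing. The last rectangle under the
       pointer wins, because it was drawn on top of everything before it. */
    void resolve_pointer(float px, float py)
    {
        unsigned int i;
        hot_id = NULL;
        for(i = 0; i < drawn_count; i++)
            if(px >= drawn[i].x && px <= drawn[i].x + drawn[i].w &&
               py >= drawn[i].y && py <= drawn[i].y + drawn[i].h)
                hot_id = drawn[i].id;
        drawn_count = 0; /* reset for the next draw pass */
    }
    
    /* During the event pass a widget only asks: was the pointer over me? */
    int id_is_hot(void *id)
    {
        return id == hot_id;
    }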

Normally, for low latency, you want to parse all your inputs first and then draw to screen, and this "hack" requires you to operate all input on the previous frame, not the current one. But if you think about it, that makes more sense. What do you think the user is clicking on: something they have already seen, or something they expect to see next frame? Creating our collision model while drawing has another benefit: we can use the graphics system's transforms to allow the buttons to move. For instance we can do this:
    void my_main_loop(BInputState *input, void *user_pointer)
    {
        if(input->mode == BAM_DRAW)    
        {
            r_matrix_push(NULL); /* similar to glPushMatrix */
            r_matrix_rotate(NULL, time, 0, 1, 0); /* similar to glRotate */
        }
    
        if(my_button(input, x, y, "Click me!"))
            printf("Button was clickedn");
    
        if(input->mode == BAM_DRAW)
            r_matrix_pop(NULL); /*  similar to glPopMatrix */
    }
    

Now we can click on a button spinning around the screen! OK, so now we have made a very pretty system for people who like to build UIs using code, but what if I want to build a UI in a nice tool? Well, the solution is to build a tool that actually generates UI code. Then it can be used either to build UIs, or as a sample-code generator for those who like to write code.

Next week, we are going to take a look at how the UI I'm working on will actually look and feel.
  • Eskil steenberg quel solaar UI design API
  • ultimate coder

  • Contest
  • Sample Code
  • Technical Article
  • Migrating Server Workloads to Red Hat Enterprise Virtualization on Intel® Xeon® Processor 2600-based Servers for Performance and Cost Improvements


Continued enhancements to Intel platforms and KVM-based Red Hat Enterprise Virtualization make platform refresh an attractive proposition. Independent testing commissioned by Intel and Red Hat demonstrates that moving workloads from servers two or more years old to open virtualization on refreshed hardware allows them to be supported on fewer hosts, reducing equipment and facilities requirements as well as lowering operational expenses such as power, cooling, and support.

Server Refresh with Intel® Xeon® Processor E5-2690 and Red Hat Enterprise Virtualization: A Simple Path to Dramatic Performance and Cost Improvements

Continued enhancements to Intel platforms and KVM-based Red Hat Enterprise Virtualization make platform refresh an attractive proposition, but when should you make the move? Intel and Red Hat commissioned Principled Technologies, a technology assessment and testing firm, to quantify some of the potential benefits. A middleware application was moved, without changes, from a bare-metal server based on the Intel® Xeon® processor 5500 series to a virtual machine (VM) under Red Hat Enterprise Virtualization 3.1 on the Intel® Xeon® processor E5-2690. The VM delivered 90.3 percent greater application performance than the previous-generation bare-metal server. Even more dramatic results came from running a second, identical (but isolated) VM on the newer host: 143.8 percent greater application performance compared to the bare-metal server. Each VM had 16 virtual cores and 24 GB of virtual RAM, matching the 16 logical cores and 24 GB of RAM on the bare-metal server. Both the VMs and the bare-metal server ran Red Hat Enterprise Linux* 5.8.2.

    Server Refresh: Migrating Server Workloads to Red Hat Enterprise Virtualization on Intel® Xeon® Processor 2600-based Servers

Principled Technologies, a technology assessment and testing firm, virtualized a middleware application, running on Red Hat Enterprise Linux on a previous-generation bare-metal server, onto a Red Hat Enterprise Virtualization VM on a two-socket server powered by Intel® Xeon® processors E5-2690. The migration caused minimal disruption, and the single VM increased performance over the previous-generation server by 90.3 percent, with headroom to host additional applications. When a second VM was added to take advantage of that headroom, each VM still outperformed the previous-generation server, indicating that moving workloads from previous-generation servers to VMs backed by Intel® Xeon® processors E5-2690 can significantly improve overall Java performance while providing the benefits of both virtualization and new server technologies.

  • Developers
  • Partners
  • Linux*
  • Server
  • Enterprise
  • Open Source
  • Virtualization
  • URL
  • Loclville Case Study


    By John Tyrrell

    Download Article


     Loclville Case Study.pdf [807.07 KB]

    Introduction


    Loclville is a free Windows* 8 app that provides an easy-to-use virtual community notice board. Developed by amateur app developer Zubair Lawrence, a Sr. Production Services Technician at Sony Pictures Imageworks, the app began life as a web site and was subsequently redesigned for the Intel-sponsored App Innovation Contest hosted by CodeProject in fall 2012.

    Visually inspired by classic cork pin boards, Loclville lets anyone post notices without the need to register, with users moderating the posts through a voting system. Central to the app’s functionality is the ability to set the geographical size of the community users want to interact with. The accessible design lets even the most reticent computer users easily engage with their local community online.


     Loclville running on Microsoft Surface*.

    The Loclville app is now available on the Intel AppUp® center and the Windows* Store, optimized for Ultrabook™ devices running Windows 8, and is also available for the Google Android* and BlackBerry* mobile devices.

During development of Loclville, Lawrence collaborated with the originator of the idea on the app's concept and execution, and with a graphic designer on the visual assets. Lawrence faced numerous development challenges, many for the first time, from accurately managing the geo-localization at the core of the app's functionality to programming for a touch UI on Ultrabook and handheld devices.

    Approach

    Lawrence approached the development of the app in two separate parts: the app side, which is the program that users download and interact with, and the server side. Each part had its own distinct development process.

    JavaScript* and HTML5

    Lawrence initially decided to use JavaScript as the main programming language for the app as one of the core technologies (the other being CSS3) for interactivity and animation in the HTML5 development environment. Based on this decision, he was ultimately able to reuse significantly more of the code and functionality from the web site than he originally thought possible, in addition to being able to keep the server-side calls.

    Though Lawrence had reservations about using JavaScript and HTML5 after hearing other developers’ concerns about speed, he found that JavaScript was able to deliver fast performance depending on how the code was written. Lawrence’s early decision to use JavaScript as the main language for the app was ultimately justified, and he described it as one of the best choices he made.

    AJAX

    During the coding of the original web site, Lawrence relied heavily on AJAX, the collection of client-side web technologies—including JavaScript and XML—that facilitates asynchronous web applications. For app creation, AJAX delivered a number of efficiencies, including the ability to reuse the web site code, particularly on the server side.

    The app was designed with API calls for the different features; for example, receiving posts or retrieving them from the drag-and-drop pocket feature. With one call, the app serves up all the posts for the area specified. The AJAX JavaScript-coded procedures from the site were reused for the app, which meant the app could more easily be completed within the project deadline.

    Server

Lawrence first considered building the app using PHP on a Linux* server. However, concerns about scalability led him to experiment with, and eventually choose, the Azure* platform from Microsoft. Azure provides a virtual machine for each server, delivering a much more scalable server configuration with as many virtual servers as needed, spread out globally through the platform's distributed cloud solution. Windows Azure Mobile Services was employed for the push notifications, with all the images stored on the Windows Azure Content Delivery Network (CDN).

    HTML5 Encapsulator

To create the desktop app, Lawrence initially experimented with AppJS, the Intel AppUp encapsulator, and TideSDK, finding the last of these a bit too complex and offering features he didn't need. After these initial road tests, Lawrence decided to use the Intel AppUp encapsulator to build the HTML5 app, finding it an invaluable tool for quickly bringing the app to the desktop.

    Once the basic app was up and running, Lawrence discovered that he required greater location support. To achieve this, Lawrence switched to the AppJS solution, which offered the HTML5 geo-location features and the Mac* and Linux support he needed.

    Store and Desktop

    Lawrence also made an early decision to develop simultaneously for the Windows Store and for the desktop in order to maximize the audience potential. Because of its association with Intel, Lawrence thinks the Intel AppUp center gives consumers a higher degree of confidence in the apps, with the extra testing helping to ensure the safety and security of content. In addition, the fact that the Intel AppUp center is not restricted to only the Windows 8 platform helped the app reach a much larger audience than it would otherwise.

    Development Challenges


    Location

    One of the most significant development challenges Lawrence faced was ensuring the integrity of the individual user location data, which is critical to the proper functioning of the app. Drawbacks with the Global Positioning System (GPS) data alone include uncertainty over the age of the data—there is always a possibility that the user is many miles from the location where the last reading was taken—and whether the GPS was actually correctly locked on when the data was read.


    Loclville screen layout on mobile.

During the early stages of development, Lawrence noticed inaccuracies in the location data, with readings that were either imprecise or showed multiple locations. To remedy this, Lawrence implemented a hybrid process in which the app collects both GPS data from the device itself and IP address data. These readings automatically update throughout the user session to ensure that positioning information is as accurate as possible and to guarantee the useful functioning of the app's key distance radius features.

    When serving up posts based on a user’s specific location, the server first receives a request for a post, and then compares the latitude and longitude coordinates of the requester with the latitude and longitude of the posts in the database. The posts within the defined radius are returned (Figure 1).

Figure 1. Sample code used by the Loclville app that establishes the distance between sets of latitude and longitude coordinates.
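
    The original figure is not reproduced here. As an illustration only (a generic sketch, not Loclville's actual code), a standard great-circle (haversine) radius check of the kind the caption describes could look like this:
    #include <math.h>
    
    #define DEG_TO_RAD (3.14159265358979323846 / 180.0)
    #define EARTH_RADIUS_KM 6371.0
    
    /* Great-circle (haversine) distance in kilometers between two
       latitude/longitude pairs given in degrees. */
    double distance_km(double lat1, double lon1, double lat2, double lon2)
    {
        double dlat = (lat2 - lat1) * DEG_TO_RAD;
        double dlon = (lon2 - lon1) * DEG_TO_RAD;
        double a = sin(dlat / 2.0) * sin(dlat / 2.0) +
                   cos(lat1 * DEG_TO_RAD) * cos(lat2 * DEG_TO_RAD) *
                   sin(dlon / 2.0) * sin(dlon / 2.0);
        return EARTH_RADIUS_KM * 2.0 * atan2(sqrt(a), sqrt(1.0 - a));
    }
    
    /* A post is returned only if it falls inside the user's chosen radius. */
    int post_is_in_range(double user_lat, double user_lon,
                         double post_lat, double post_lon, double radius_km)
    {
        return distance_km(user_lat, user_lon, post_lat, post_lon) <= radius_km;
    }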

    Touch and UI

    Because Lawrence had no prior experience implementing a touch UI, understanding the different requirements related to menus, screen size, button size, and positioning on-screen was challenging. Lawrence used a responsive design approach, designing in such a way that the app or web site provided an optimized viewing experience across a wide range of different devices and screen sizes. Using this approach, Lawrence was able to ensure that the app serves up the appropriate UI for the device it’s running on, delivering a high level of legibility and straightforward navigation, with a minimum amount of resizing, panning, and scrolling required from the user.


    Loclville running responsively on a variety of different screen configurations.

    To determine how the UI should respond, the app queries the pixel size of the screen and the device itself, then uses those conditions to decide how to draw the UI and adapt to any changes. If the user rotates the screen of the device—for a tablet or smartphone, for example—or if the web browser window is resized, the app responds and adjusts the UI accordingly.

    The original Loclville web site featured bold and simple buttons and menus that helped facilitate a relatively straightforward transition to smaller screens and touch controls. Tweaks were made to the drop-down menus and some of the links in order to optimize them for touch. These modifications included making buttons more finger-friendly by ensuring enough separation between them to help prevent accidental presses.

    Adapting to smaller screens also involved repositioning some of the menus to make better use of the limited space available. The Distance, Categories, and Post buttons were moved to a footer menu, and posts were changed to be shown in a list view, making it possible to show more in a smaller space. Because of the overall simplicity of the layout, Lawrence decided against implementing any specific changes related to flipping between portrait and landscape views on a phone or tablet. The overall result of the optimizations is an app that is well adapted to multiple devices and screen sizes.

    jQuery

    Another challenge that Lawrence faced was that he needed to make jQuery work in a Windows 8 JavaScript app. Unlike the desktop version of the app, the Windows Store app had to meet code compliance requirements, which meant porting the jQuery library to allow it to work with the Windows Store version.

    Figure 2 below shows the code implemented in the app for porting jQuery to work in the Windows 8 JavaScript app.

Figure 2. Sample code from the Loclville app for porting jQuery.

    Testing

    Beta testing the app was an informal process, conducted primarily by collaborators, their friends, and by the users themselves. After each set of changes, Lawrence released a new version of the code, and then proactively gathered feedback from users regarding its functionality, usability, and bugs. Lawrence said the testing was the most difficult part of the development process, noting that users often behaved unpredictably when using the app, making the testing both very important and challenging.

    An example of unexpected behavior occurred when users were asked to input their address, a vital piece of data for the proper functioning of the app. Lawrence repeatedly saw an error stating that the app was unable to find the address. He eventually realized that the input question was ambiguous, causing users to enter their e-mail address, which naturally would not permit the app to pinpoint their physical location. The implementation of automatic geo-localization of the users using GPS and IP address data provided the solution.

    Lawrence used a touch screen Ultrabook device for testing to ensure the integrity and proper functioning of the touch interface alongside the traditional mouse and keyboard controls.

    Feedback from reviews has also proved useful on at least one occasion for identifying and fixing a bug, although Lawrence has been frustrated by the one-way nature of the communication with reviewers. The lack of ability to reply means there is no way to respond directly to let a reviewer know that a bug he or she discovered has since been fixed.

    Metrics

    To gather Loclville user behavior metrics, Lawrence implemented the open-source analytics tool Piwik*, which gathers data including the app version (for example, browser, Android, iOS*, or Windows 8), the user’s city, whether the user clicks a post, the link clicked to leave the site, and the session length. Lawrence also gathers his own data from the app, including number of posts, user location, and growth and usage patterns.

    Lawrence analyzes the data to determine where users are coming from and then offers direct support to those within that locality by creating specific posts, with the goal of helping users grow the community.

    Next Steps


    Some of the features that Lawrence is considering implementing in the future include:

    • Localization. Making the site available in languages other than English, although which languages and when this feature will be available have yet to be defined.
    • Monetization. The ability to pay to post commercial offers and information. This feature will be based on the same rules of user voting, thus encouraging only content with real value to users.

 Promotional offers in the Loclville mobile app.

    • Family filter. Placing more power in the hands of the individual user, as opposed to the community, regarding what content a user sees; for example, providing the ability to filter profanity. This feature would also potentially include the ability to prevent unregistered users from posting pictures. One related challenge is finding a way to filter in multiple languages, a problem that Lawrence is still working to solve.
    • Social media integration. On-screen buttons that allow users to directly share Loclville content through a chosen social network such as Facebook*.
    • Near Field Communication (NFC). Functionality to allow users to share posts by touching two devices together using NFC technology.

    The next major planned update for Loclville is version 3.0, which is currently scheduled for release in September 2013.

    Conclusion


    Outside of the core coding process, Lawrence cited the following points as important to consider when creating apps:

    • Touch is important, but developers shouldn’t ignore the mouse and keyboard. Apps need to be user friendly for both types of user interface.
    • Awareness of screen sizes and orientation is crucial. Apps are now expected to run on more devices than ever, from the 10-inch screen of a Microsoft Surface* and 13- or 14-inch Ultrabook device screens to even a 30-inch desktop screen. With tablets and Ultrabook convertibles, developers also need to be aware of the different design demands of landscape and portrait layouts, and develop accordingly.
    • Take the time and effort to submit apps to both the Windows Store and the Intel AppUp center because the potential audience is huge.

    Despite the concerns he heard regarding the use of HTML5 and JavaScript as an app development platform, Lawrence firmly believes that the combination of HTML5 and AJAX technologies was the right choice, and he plans to continue using a similar workflow for subsequent apps.

    About the developer


    A 2009 Masters graduate of the Savannah College of Art and Design in Georgia, Zubair Lawrence started programming as part of his Visual Effects degree course and rapidly developed both a strong affection and a clear affinity for it. He has since developed his coding knowledge and abilities beyond the Python* programming that was part of his course, becoming a competent self-taught application programmer.

    In June 2011, Lawrence joined the CodeProject independent developer community (http://www.codeproject.com) using the moniker Helix Ten and continued to code avidly in his spare time even after joining California-based Sony Pictures Imageworks* in January 2012. In December 2012, Lawrence successfully entered the Loclville app into the Intel-sponsored App Innovation contest hosted on the CodeProject web site. Lawrence complements his professional work at Sony Pictures Imageworks with coding in his spare time, including ongoing support and updates to the Loclville app and web site.

    Helpful resources


    One of the most valuable resources for Lawrence during the development of the Loclville app was the development community on CodeProject, the site that hosted the Windows* 8 & Ultrabook™ App Innovation Contest. During development of the app, Lawrence interacted frequently with the community members, and he still visits the site frequently. The Intel HTML5 Encapsulator was invaluable in allowing Lawrence to bring the app to the desktop quickly, with a great deal of documentation and videos available to support the development process. Lawrence cited the documentation available on the Intel AppUp center as a vital resource during the app submission process, describing it as being very clear and thorough.

    Portions of this document are used with permission and copyright 2012 by CodeProject. Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see software.Intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, Intel AppUp, Intel Atom, the Intel Inside logo, and Ultrabook, are trademarks of Intel Corporation in the U.S. and/or other countries.  *Other names and brands may be claimed as the property of others. Copyright © 2013. Intel Corporation. All rights reserved

  • ultrabook
  • Windows* 8
  • Windows store
  • Windows desktop
  • Loclville
  • virtual community
  • notice board
  • app innovation contest
  • Developers
  • Microsoft Windows* 8
  • Microsoft Windows* 8 Desktop
  • Microsoft Windows* 8 Style UI
  • Virtualization
  • URL
  • Speeding Up Your Cloud Environment On Intel® Architecture


In my previous blog I discussed ways to speed up your cloud environment; I will continue that thread by introducing the topic of Software Defined Networks (SDN). The industry has been depending on proprietary networking equipment and appliances, essentially creating an environment of vertically integrated software running on dedicated hardware. With millions of new connected devices and increasing traffic in the cloud computing environment, network congestion challenges the vertical networking business model. As a result, the cloud computing community is looking into network virtualization solutions.

This blog focuses on speeding up your data packet networking by using the Intel® Data Plane Development Kit (Intel® DPDK) on Intel® Architecture. With an Intel® Xeon® processor E5-2600 series (or later), an integrated DDR3 memory controller, an integrated PCI Express controller, and the Intel DPDK, you can potentially see increased small-packet throughput in your cloud computing environment. Before going into the Intel DPDK, I want to provide some insight for those unfamiliar with SDN terminology.

Common Terminology for SDN (from Wikipedia)

    • SDN is a form of network virtualization in which the control plane is separated from the data plane and implemented in a software application. SDN architecture gives network administrators programmable control of network traffic without requiring access to the network's hardware devices.
    • The control plane is the part of the router architecture concerned with the information in a routing table that defines what to do with incoming packets.
    • The data plane is the part of the router architecture that decides what to do with packets arriving on an inbound interface.
    • Network virtualization is the process of combining hardware and software network resources and network functionality into a single software-based administrative entity.
    • A virtual network is a network whose links do not consist of physical (wired or wireless) connections between computing devices. Network virtualization involves platform virtualization, often combined with resource virtualization.
    • Platform virtualization hides the physical characteristics of a computing platform from users, instead showing another abstract computing platform.
    • Hypervisor is the software that controls virtualization.

    Intel’s 4:1 Workload Consolidation Strategy

Intel’s strategy is to consolidate the workloads (application, control plane, packet and signal processing) into a more scalable and simplified solution on Intel® Xeon® processor platforms. Figure 1 depicts this software-based approach, Intel’s 4:1 workload consolidation strategy. Figure 2 and Figure 3 show the performance increases across various generations of Intel architecture processor-based platforms.

    Figure 1. Intel's 4:1 Workload Consolidation Strategy

Note: Performance tests and ratings below are HW/SW configuration dependent and measured using specific computer systems and/or components. Any difference in configuration will be reflected in the test results.


Figure 2. Breakthrough data performance with the Intel® Data Plane Development Kit (Intel® DPDK): L3 packet forwarding

Note: The measurement is in million packets per second (Mpps), and each packet is 64 bytes. The data performance (L3 packet forwarding) indicates that you can achieve higher throughput by applying the Intel DPDK to your Linux environment.

Figure 3. IPv4 Layer 3 forwarding performance for various generations of Intel Architecture processor-based platforms

Figures 2 and 3 show the small-packet performance achievable using Intel Architecture with the Intel DPDK. The hardware elements that contribute to the performance increase are the integrated memory controller, the integrated PCI Express* controller, and the increased number of processor cores per chip in the latest Intel processors.

The system configurations used to collect the data in Figure 2 and Figure 3 were:

    • Dual Intel® Xeon® processors E5540 (2.53 GHz, 4 cores) processed 42 Mpps.
    • Dual Intel® Xeon® processors E5645 (2.40 GHz, 6 cores) processed 55 Mpps.
    • A single Intel® Xeon® processor E5-2600 (2.0 GHz, 8 cores) processed 80 Mpps (with Intel® Hyper-Threading Technology (Intel® HT Technology) disabled).
    • Dual Intel® Xeon® processors E5-2600 (2.0 GHz, 8 cores) processed 160 Mpps (with Intel® HT Technology disabled) and 4x 10GbE dual-port PCI Express* Gen2 NICs on each processor.

    Intel DPDK Overview

The Intel DPDK is based on simple embedded-system concepts and allows users to build efficient, high-performance applications for small packets (64 bytes). It consists of a growing number of libraries (Figure 4) designed for high-speed data packet networking, and offers a simple software programming model that scales from Intel® Atom™ processors to the latest Intel® Xeon® processors. The source code is available for developers to use and/or modify in a production network element.

    • The Environment Abstraction Layer (EAL) provides access to low-level resources (hardware, memory space, logical cores, etc.) through a generic interface that hides the environment specifics from the applications and libraries.
    • The Memory Pool Manager allocates NUMA-aware pools of objects in memory.  The pools are created in huge-page memory space to increase performance by reducing translation lookaside buffer (TLB) misses, and a ring is used to store free objects.  It also provides an alignment helper to ensure objects are distributed evenly across all DRAM channels, thus balancing memory bandwidth utilization across the channels.
    • The Buffer Manager reduces the amount of time the system spends allocating and de-allocating buffers.  The Intel DPDK pre-allocates fixed size buffers, which are stored in memory pools for fast, efficient cache-aligned memory allocation and de-allocation from NUMA-aware memory pools.  Each core has a dedicated buffer cache to the memory pools, which is replenished as required.  This provides a fast and efficient method for quick access and release of buffers without locks.
    • The Queue Manager implements safe lockless queues instead of using spinlocks that allow different software components to process packets, while avoiding unnecessary wait times.
    • The Ring Manager provides a lockless implementation for single or multi producer/consumer en-queue/de-queue operations, supporting bulk operations to reduce overhead for efficient passing of events, data and packet buffers.
    • Flow Classification provides an efficient mechanism for generating a hash (based on tuple information) used to combine packets into flows, which enables faster processing and greater throughput.
    • Poll Mode Drivers for 1 GbE and 10 GbE Ethernet controllers greatly speed up the packet pipeline by receiving and transmitting packets without the use of asynchronous, interrupt-based signaling mechanisms, which have a lot of overhead.

    Figure 4. Major Intel DPDK Components
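
    To give a feel for the poll-mode programming model, here is a heavily simplified receive-and-forward loop. Treat it as a sketch only: EAL, port, and memory-pool initialization are omitted, and the exact types and signatures follow recent DPDK releases, so they may differ in older ones.
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    
    #define BURST_SIZE 32
    
    /* Busy-poll loop: receive a burst of packets on one port and forward it
       to another, with no interrupt-based signaling involved.
       EAL, port, and mempool initialization are assumed to have been done. */
    static void forward_loop(uint16_t rx_port, uint16_t tx_port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;
    
        for(;;)
        {
            /* Poll-mode receive: returns immediately with 0..BURST_SIZE packets. */
            nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
            if(nb_rx == 0)
                continue;
    
            /* Transmit the burst; free any packets the TX queue could not take. */
            nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);
            for(i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }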

    The Intel DPDK library is currently provided cost-free to OEMs under a BSD licensing model. A public version of the software will be available to download in early 2013.  For more information, see www.intel.com/go/dpdk

    Once you download the Intel DPDK, here is the suggested reading order to use the kit:

Release Notes: Provides release-specific information, including supported features, limitations, fixed issues, and known issues. It also provides answers to frequently asked questions.
    Getting Started Guide: Describes how to install and configure the Intel DPDK; designed to get users up and running quickly with the software.
    Programmer's Guide: Describes:

    — The software architecture and how to use it (through examples), specifically in a Linux* application (linuxapp) environment.
    — The content of the Intel DPDK, the build system (including the commands that can be used in the root Intel® DPDK Makefile to build the development kit and an application) and guidelines for porting an application.
    — Optimizations used in the software and those that should be considered for new development.

    API Reference: Provides detailed information about Intel DPDK functions, data structures and other programming constructs.
    Sample Application User Guides: A set of guides, each describing a sample application that showcases specific functionality, together with instructions on how to compile, run and use the sample application.

    Conclusion

    The growing demand for more connected devices and data accesses over the network has pushed the vertical network model to the limit.  To save cost and reduce power consumption of the network infrastructure, you may consider decreasing the number of physical assets by consolidating their functions using network virtualization on a common platform.  By using the Intel DPDK library on a common platform, you can:

    • experience faster network packet processing,
    • potentially reduce cost by simplifying the hardware to industry standard server architectures,
    • conserve energy by using power-optimized Intel platforms,
    • and increase efficiency by maximizing the utilization of your existing environment.

     References:

    Packet Processing on Intel® Architecture:

  • cloud
  • optimization
  • performance
  • IA
  • processor
  • Cloud Computing

  • Technical Article

  • Intel oVirt Workshop



Intel oVirt Workshop | May 2013 | Shanghai, China

oVirt strives to become the first and best truly open and comprehensive data center virtualization management suite. As the oVirt community rapidly evolves and grows, one of the ways we look to connect is through oVirt Workshops around the globe. The May 2013 oVirt workshop was successfully held at Intel's Shanghai campus. This workshop was designed to encourage collaboration in our community, as well as to help answer questions about the project from both a developer's and a user's perspective. Slides and videos are shared in the event agenda below.

Workshop Agenda:

May 8, 2013 (Wednesday), Track 1

    08:30-09:00 | Opening remarks and Keynote: Intel Open Source Strategy (PDF Download; Watch Video) | He, Jackson (Intel)
    09:00-10:00 | oVirt Introduction (PDF Download; Watch Video) | Doron Fediuck (Red Hat)
    10:00-11:00 | oVirt Architecture Overview (PDF Download; Watch Video) | Dan Kenigsberg (Red Hat)
    11:00-11:15 | Coffee Break
    11:15-12:15 | Deploying and testing oVirt using nested virtualization (PDF Download; Watch Video) | Mark Wu (IBM)
    12:15-13:30 | Lunch
    13:30-14:30 | oVirt SLA Overview (PDF Download; Watch Video) | Doron Fediuck (Red Hat)
    14:30-15:00 | oVirt storage system and IBM's activity (PDF Download; Watch Video) | Shu Ming (IBM)
    15:00-15:15 | Coffee Break
    15:15-16:15 | Troubleshooting oVirt (PDF Download; Watch Video) | Tal Nisan (Red Hat)
    16:15-17:00 | Converged Infrastructure with oVirt and Gluster (PDF Download; Watch Video) | Theron Conrey (Red Hat)

May 8, 2013 (Wednesday), Track 2

    08:30-09:00 | Opening remarks and Keynote: Intel Open Source Strategy (PDF Download; Watch Video) | He, Jackson (Intel)
    09:00-10:00 | Gluster Community Overview and Roadmap (PDF Download; Watch Video) | John Mark Walker (Red Hat)
    10:00-11:00 | Gluster Architecture Overview (PDF Download; Watch Video)
    11:00-11:15 | Coffee Break
    11:15-12:15 | oVirt Configurations and Gluster (PDF Download; Watch Video) | Tal Nisan (Red Hat)
    12:15-13:30 | Lunch
    13:30-14:30 | Converged Infrastructure with oVirt and Gluster (PDF Download; Watch Video) | Theron Conrey (Red Hat)
    14:30-15:30 | Gluster and Swift Object Store (UFO) (PDF Download; Watch Video) | John Mark Walker (Red Hat)
    15:30-15:45 | Coffee Break
    15:45-16:45 | Developing with GlusterFS - translator framework, libgfapi and more (PDF Download; Watch Video) | Vijay Bellur (Red Hat)

May 9, 2013 (Thursday)

    09:00-10:00 | oVirt-Node Overview (PDF Download; Watch Video) | Ying Cui and Guohua Ouyang (Red Hat)
    10:00-11:00 | Support oVirt on Ubuntu (PDF Download; Watch Video) | Zhengsheng Zhou (IBM)
    11:00-11:15 | Coffee Break
    11:15-12:15 | oVirt SLA-MoM as host level enforcement agent (PDF Download; Watch Video) | Doron Fediuck (Red Hat)
    12:15-13:30 | Lunch
    13:30-14:30 | Trusted Compute Pools Deep Dive (PDF Download; Watch Video) | Wei Gang
    14:30-15:30 | KVM Nested Virtualization (PDF Download; Watch Video) | Dongxiao Xu
    15:30-15:45 | Coffee Break
    15:45-16:45 | The present and future of SetupNetwork in oVirt (PDF Download; Watch Video) | Dan Kenigsberg (Red Hat)
    16:45-17:15 | Closing remarks and closing keynote

  • oVirt
  • Developers
  • Partners
  • Professors
  • Students
  • Advanced
  • Beginner
  • Intermediate
  • Cloud Computing
  • Open Source
  • Virtualization

  • URL
  • HAXM always crashes after running Maps.apk


My HAXM (the emulator-x86.exe process) always crashes after running Maps.apk in an AVD built from the Google APIs (x86 System Image) version 19 revision 4, with Google ADT version 22.6.2 and HAXM version 1.0.7, on Windows 8.1 Enterprise x64.

    Trouble connecting Atom device via USB


    Hi.

I'm using Beacon Mountain for Windows on Windows 7 SP1. I installed the Android USB driver for Windows version 1.1.5, which comes with it. When I connect the cable, Windows sees the connection and tries to install driver software, but it does not install the "Android" device, so I cannot push my app to the device to test it. (Eclipse does not see the device.)

    I have tried removing the device driver, reinstalling it, rebooting in between, just about every combination of hoops that one normally jumps through with this, and nothing is working.

    For the work I'm doing I have to run on the Atom device I have; the emulator environment won't do.

    Suggestions, pointers, directions, etc. will be most welcome.

    Thanks,

    Aharon Robbins

    Yoga App icon creation


Hi guys,

I have just started learning more about the Intel XDK. I am loving it, and Connie Brodie's lesson was brilliant. Unfortunately, she is missing just one video: how to release the app on Google Play. How does one create an icon that is linked to the whole app, so that when users download and install the app they get the icon automatically?

Thanks, Connie, your tutorials are the best. Keep it up.

I am also failing to download Andrew's video on deploying apps to Google Play. Please send me a link. Thanks a lot.

    problem with 4.4.2 image and HOST GPU setting


This post is just to report a small problem with the 4.4.2 (API 19) x86 image and the Host GPU setting.

The problem affects the whole emulator, not only apps. Basically, when displaying something animated (or even when the user swipes the home screen), the animation suddenly stops and the display seems stuck at the frame where the animation stopped. The animation does not continue, and when it stops in the middle of a swipe, the swipe gesture stops working as well. You have to press the menu button to unlock the display (probably because the system is then asked to display something else). Small in-app animations are also affected; for example, the indeterminate progress bar in the action bar makes a few rounds and stops.

If I disable Host GPU, everything works fine. It is strange, though, because I have no problems at all with the 4.2 (API 17) image and Host GPU enabled.

My machine is Win7 64-bit with a Core i7 CPU, 6 GB RAM, and integrated graphics. I tried with both HAXM 1.0.6 and 1.0.8.

I can make screenshots of the stuck display if that would be useful.
