Friday 25 August 2017

Moving Average in SAS Enterprise Guide


Some people see data as facts and figures. But it's more than that. It's the lifeblood of your business. It contains your organization's history. And it's trying to tell you something. SAS helps you make sense of the message. As the leader in business analytics software and services, SAS transforms your data into insights that give you a fresh perspective on your business. You can identify what's working. Fix what isn't. And discover new opportunities. We can help you turn large amounts of data into knowledge you can use, and we do it better than anyone else. It's no wonder the vast majority of our customers continue to use SAS year after year. We believe it's because we hire great people to create great software and services. SAS is the leader in analytics. Through innovative analytics, business intelligence and data management software and services, SAS helps customers at more than 83,000 sites make better decisions faster. Since 1976, SAS has been giving customers around the world THE POWER TO KNOW.

SAS Analytics in action: more than four decades of experience and innovation. Discover why SAS is the analytics leader: SAS delivers proven solutions that drive innovation and improve performance.

Company facts & financials. Number of countries installed: SAS has customers in 148 countries. Total worldwide customer sites: our software is installed at more than 83,000 business, government and university sites. Fortune Global 500 customers: 94 of the top 100 companies on the 2016 Fortune Global 500 are SAS customers. Employees worldwide: 14,063 total headcount, distributed by geography as follows. United States: 7,119. World headquarters (Cary, NC): 5,600. Canada: 335. Latin America: 436. Europe, Middle East and Africa: 3,720. Asia Pacific: 2,453.

SAS (pronounced "sass") once stood for "statistical analysis system." It began at North Carolina State University as a project to analyze agricultural research. Demand for such software capabilities began to grow, and SAS was founded in 1976 to help customers in all sorts of industries, from pharmaceutical companies and banks to academic and government institutions. SAS, both the software and the company, thrived over the following decades. Development of the software reached new heights in the industry because it could run across all platforms, using the multivendor architecture for which it is known today. While the company's reach spread around the world, the encouraging and innovative corporate culture stayed the same. Explore each era of our corporate history through photos and descriptions of how SAS came to be.

Academic roots. North Carolina State University, located in the state capital of Raleigh, North Carolina, became the leader of the consortium, largely because it had access to a more powerful mainframe computer than the other universities. The project eventually found a home in the Department of Statistics.

Early faculty leadership. North Carolina State University faculty members Jim Goodnight and Jim Barr emerged as the project leaders, with Barr creating the architecture and Goodnight implementing the features that sat on top of that architecture and expanded the system's capabilities.
When NIH discontinued funding in 1972, the consortium members decided to chip in $5,000 each per year to allow NCSU to continue developing and maintaining the system and supporting their statistical analysis needs.

Expanding the team and customer base. Over the following years, SAS software was licensed by pharmaceutical companies, insurance companies and banks, in addition to the academic community that had given birth to the project. Jane Helwig, another Department of Statistics employee at NCSU, joined the project as documentation writer, and John Sall, a graduate student and programmer, completed the core team.

Changing how the software is sold. Sales activities shifted from telemarketing to a direct sales force focused on geographic territories. The company introduced its first vertical sales group with the release of SAS/PH-Clinical software for the pharmaceutical industry. And demand for packaged solutions, designed to meet specific business needs in different industries, led to the creation of the Business Solutions division, responsible for solutions such as SAS Financial Management and SAS Human Capital Management (formerly named CFO Vision and HR Vision).

A new focus on education. The company moved into new territory through the development of high-quality online curriculum resources for the classroom. SAS Curriculum Pathways online interactive resources focus on material that is difficult to convey through traditional teaching methods. The tools cover topics through lessons students can do, see and hear, providing information and fostering insight in ways that textbooks cannot. The software lets teachers keep students engaged and learning while encouraging the use of technology in the classroom.

Real support for decision making. Most importantly, SAS positioned itself beyond the packaged product as a provider of decision-support software, with extended capabilities in areas such as data analysis and guided analysis of clinical trials and reporting. The company introduced software for building customized executive information systems (EIS) and launched its Rapid Warehousing program. As the Internet became a more vital tool for business, demand for web-enabled software grew. In response, SAS brought web-enabled capabilities to its software, allowing customers to use SAS solutions to become even more competitive in a fast-growing business environment.

Understanding the customer. With its powerful data mining capabilities, SAS was able to take the lead in an area that was in more demand than any other business software offering available: customer relationship management. Now web-enabled with new e-intelligence solutions, SAS continued to stay at the forefront of the business software industry.

The recognition keeps rolling in. Recognition for quality software products continued to come from many sources around the world, including Datamation, Data Warehousing World, Software Magazine, ComputerWorld Brasil and PC Week, along with the prestigious French analyst association Yphise and Australia's Corporate Research Foundation.
In addition, the US Food and Drug Administration recognized the integrity of SAS software by selecting SAS technology as the standard for new drug applications. SAS continued to be recognized as a great place to work, receiving accolades from FORTUNE, Working Mother, BusinessWeek and Mother Jones magazines, along with prominent print and broadcast media coverage in the United States, Europe and Australia.

Advanced Analytics certification. Expand your analytical skill set. Make yourself more marketable. And become a more valued asset by learning the latest advanced analytics techniques for solving critical business challenges across all fields. The SAS Certified Advanced Analytics Professional program, offered by the SAS Academy for Data Science in both classroom and blended formats, will broaden your knowledge, deepen your analytical skills and expand your learning horizons.

About the Advanced Analytics certification program. Is this program right for me? This program is for those who want to deepen their advanced analytics knowledge and skills. A strong background in applied mathematics is required. A master's degree or higher in a quantitative or technical field is recommended, but not mandatory.

Prerequisites. To enroll in the program, you need at least six months of programming experience in SAS or another programming language. If you are just getting started or need to improve your SAS programming skills, the SAS Programming for Data Science Fast Track will give you a good foundation. It is also recommended that you have at least six months of experience using statistics and/or mathematics in a business setting. You can start with the Statistics 1: Introduction to ANOVA, Regression, and Logistic Regression course, which is available as instructor-led training or as free online e-learning.

"The certification I'm getting from the SAS Academy for Data Science gives me more credibility when talking to decision makers." - Etienne Ndedi, SAS Academy for Data Science graduate.

Topics covered: Machine learning and predictive modeling techniques. How to apply these techniques to large, distributed and in-memory data sets. Pattern detection. Experimentation in business. Optimization techniques. Time series forecasting. Essential communication skills.

SAS software covered: SAS Enterprise Miner, SAS/ETS, SAS High-Performance Data Mining, SAS In-Memory Statistics (PROC IMSTAT), SAS Studio, SAS/OR, SAS/STAT, SAS Text Miner, SAS Visual Statistics, and SAS tools for integration with open source.

Applied Analytics Using SAS Enterprise Miner. This course covers the skills required to assemble analysis flow diagrams using SAS Enterprise Miner for both pattern discovery (segmentation, association and sequence analysis) and predictive modeling (decision tree, regression and neural network models). Topics covered: Defining a SAS Enterprise Miner project and exploring data graphically. Modifying data for better analysis results. Building and understanding predictive models, including decision trees and regression models. Comparing and explaining complex models. Generating and using score code. Applying association and sequence discovery to transaction data.
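The course outline above ends with generating and using score code. As a rough Base SAS illustration of that idea (a sketch only, not course material; the data sets train and newdata and the model variables are hypothetical, and the CODE statement assumes a recent SAS/STAT release), PROC LOGISTIC can write DATA step scoring code that is later applied to fresh records:

    /* Fit a logistic model and export DATA step score code. */
    proc logistic data=train;
       class region / param=ref;
       model bad(event='1') = income age region;
       code file='score_code.sas';   /* writes the scoring statements */
    run;

    /* Apply the generated score code to new records. */
    data scored;
       set newdata;
       %include 'score_code.sas';    /* creates predicted-probability variables */
    run;

Within Enterprise Miner itself, the Score node plays the same role, packaging the flow's transformations and model into reusable code.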
Communicating Technical Results to a Non-Technical Audience. This course teaches you how to design and deliver effective presentations through self-assessment and discussion of presentation organization and the effective use of visual aids. You will receive an individual analysis of your behavioral style, including a description of your strengths and opportunities for improvement, as well as strategies for communicating with others. Topics covered: Diagnosing and assessing different styles of human behavior. Communicating and coping more effectively with different types of people. Using your own strengths, and your knowledge of others, to improve communication. Delivering information in a concise, well-organized format. Creating a presentation focused on communicating unfamiliar or technical information to a non-technical audience. Designing presentation materials with clarity and purpose.

Neural Network Modeling. This course helps you understand and apply two popular artificial neural network algorithms: multilayer perceptrons and radial basis functions. Both theoretical and practical aspects of fitting neural networks are covered. Topics covered: Building multilayer perceptron and radial basis function neural networks. Building customized neural networks using the NEURAL procedure. Choosing an appropriate network architecture and determining the relevant training method. Avoiding overfitting in neural networks. Performing autoregressive time series analysis using neural networks. Interpreting neural network models.

Predictive Modeling Using Logistic Regression. This course explores predictive modeling using SAS/STAT software, with an emphasis on the LOGISTIC procedure. Topics covered: Using logistic regression to model an individual's behavior as a function of known inputs. Selecting variables and interactions. Creating effect plots and odds ratio plots using ODS Statistical Graphics. Handling missing data values. Addressing multicollinearity in predictors. Assessing model performance and comparing models. Recoding categorical variables using smoothed weight of evidence. Using efficiency techniques for massive data sets.

Data Mining Techniques: Predictive Analytics on Big Data. This course introduces applications and techniques for assaying and modeling large data. It presents basic and advanced modeling strategies, such as group-by processing for linear models, random forests, generalized linear models and mixture distribution models. You will perform hands-on exploration and analysis using tools such as SAS Enterprise Miner, SAS Visual Statistics and SAS In-Memory Statistics. Topics covered: Using applications designed for big data analytics. Exploring data efficiently. Reducing data dimensionality. Building predictive models using decision trees, regressions, generalized linear models, random forests and support vector machines. Building models that handle multiple targets. Assessing model performance. Deploying models and scoring new predictions.

Using SAS to Put Open Source Models into Production. This course introduces the basics of integrating R programming and Python scripts into SAS and SAS Enterprise Miner.
Topics are presented in the context of data mining, which includes data exploration, model prototyping, and supervised and unsupervised learning techniques. Topics covered: Calling R packages in SAS. Using Python scripts in SAS. Integrating open source data exploration techniques into SAS. Integrating open source models into SAS Enterprise Miner. Creating production (score) code for R models.

Text Mining. In this course, you will learn to use SAS Text Miner to uncover underlying themes or concepts contained in large document collections, automatically group documents into topical clusters, classify documents into predefined categories, and integrate text data with structured data to enrich predictive modeling efforts. Topics covered: Converting documents stored in standard formats (Microsoft Word, Adobe PDF, etc.) into generic HTML or TXT. Reading documents from a variety of sources (web pages, flat files, data elements in a relational database, spreadsheet cells, etc.) into SAS tables. Processing textual data for text mining (for example, correcting spelling errors or recoding acronyms and abbreviations). Converting unstructured text-based character data into structured numeric data. Exploring words and phrases in a document collection. Querying document collections using keywords (that is, identifying documents that contain specific words or phrases). Identifying topics or concepts that appear in a document collection. Creating user-influenced topic tables from scratch, modifying machine-generated topics, or creating concepts using domain knowledge. Using derived topic tables, preexisting user-influenced topic tables, or both to improve information retrieval and document classification. Clustering documents into homogeneous subgroups. Classifying documents into predefined categories.

Time Series Modeling Essentials. In this course, you'll learn the fundamentals of modeling time series data, with an emphasis on the applied use of the three main model types for analyzing univariate time series: exponential smoothing, autoregressive integrated moving average with exogenous variables (ARIMAX), and unobserved components (UCM). A minimal exponential smoothing sketch follows at the end of this course list. Topics covered: Creating time series data. Accommodating trend, as well as seasonal and event-related variation, in time series models. Diagnosing, fitting and interpreting exponential smoothing, ARIMAX and UCM models. Identifying the relative strengths and weaknesses of the three model types.

Experimentation in Data Science. This course explores the essentials of experimentation in data science, why experiments are central to any data science effort, and how to design efficient and effective experiments. Topics covered: Defining common terminology in designed experiments. Describing the benefits of multifactor experiments. Differentiating between the impact of a model and the impact of actions taken from that model. Fitting incremental response models to assess the unique contribution of a marketing message, action, intervention or process change on outcomes.
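As the point of reference promised above for the exponential smoothing models in the Time Series Modeling Essentials course (not course material, and assuming SAS/ETS is licensed), an additive Winters model can be fit and projected twelve months ahead with PROC ESM on the bundled SASHELP.AIR data:

    /* Exponential smoothing forecast of the classic airline series. */
    proc esm data=sashelp.air out=forecasts lead=12 print=forecasts;
       id date interval=month;           /* monthly time ID variable */
       forecast air / model=addwinters;  /* additive Winters smoothing */
    run;

The ARIMAX and UCM counterparts live in PROC ARIMA and PROC UCM, respectively.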
Optimization Concepts for Data Science. This course focuses on linear, nonlinear and efficiency optimization concepts. Participants learn how to formulate optimization problems and how to make their formulations efficient by using index sets and arrays. Course demonstrations include examples of data envelopment analysis and portfolio optimization. The OPTMODEL procedure is used to solve optimization problems that reinforce concepts introduced in the course (a toy PROC OPTMODEL sketch appears at the end of this section). Topics covered: Identifying and formulating appropriate approaches for solving various linear and nonlinear optimization problems. Creating optimization models commonly used in industry. Formulating and solving a data envelopment analysis. Solving optimization problems using the OPTMODEL procedure in SAS.

Conference sections. Popular topics in this section include the use of PROC REPORT, SAS styles, templates and ODS, as well as a variety of techniques used to deliver SAS results to Microsoft Excel, PowerPoint and other Office applications. Topics include graphics, data visualization, publishing and reporting. Popular topics in this section include the use of SAS/GRAPH, SAS styles, templates and ODS, as well as a variety of techniques used to deliver SAS results to Microsoft Excel and other Office applications.

Data science is considered an extension of statistics, data mining and predictive analytics. This section focuses on how "the sexiest job of the 21st century" is done in SAS. Areas of interest: text analytics and social media data.

Presenters prepare a digital display that is available for viewing by all attendees throughout the conference, rather than delivering a lecture-style presentation. This section often showcases high-resolution graphics and/or thought-provoking concepts or ideas that allow some independent study by conference attendees. Presentations center on data visualization, including PROC GPLOT, animated graphs and other customizations.

Hands-On Workshops provide attendees with 'hands-on-the-keyboard' interaction with SAS software during each presentation. Presenters guide attendees through examples of SAS software techniques and capabilities, offering the chance to ask questions and to learn through practice. All HOW presentations are given by experienced SAS users who are invited to present.

This section features presentations on data integration, analysis and reporting, but with industry-specific content. Examples of content-driven topics are: health outcomes and research methods; healthcare data standards and quality control; submitting clinical trial data to the FDA; and banking, credit card, insurance and risk management modeling and analysis. Another section helps SAS users understand how to dive into the rich world of resources devoted to the pursuit of high-quality SAS education and training, publishing, social networking, consulting, certification, technical support, and opportunities for professional affiliation and growth.
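Returning to the Optimization Concepts course above: here is the promised toy sketch (not course material, and assuming SAS/OR is licensed) showing how PROC OPTMODEL expresses a linear program almost exactly as it is written on paper:

    /* A small linear program: choose x and y to maximize profit. */
    proc optmodel;
       var x >= 0, y >= 0;             /* decision variables */
       max profit = 3*x + 5*y;         /* objective function */
       con labor:    x + 2*y <= 14;    /* resource constraints */
       con material: 3*x + y  <= 18;
       solve;                          /* invoke the LP solver */
       print x y;                      /* optimal decision values */
    quit;

Index sets and arrays, covered in the course, let this same syntax scale from a toy problem to portfolio and data envelopment models.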
The SAS Essentials section allows less experienced SAS users and others to attend a series of presentations that walk them through the fundamental concepts of writing DATA step and PROC syntax in Base SAS, followed by two hands-on workshops. All SAS Essentials presentations are led by experienced SAS users who are invited to present.

If you have a program that takes a long time to run, or that will be run many times, you might want to track how long each part of the program takes. This can help you find the slow parts of your program and predict how long a future run will take. This paper presents a tool to help with these problems: the WriteProgramStatus macro provides a way to create a status file that is easily read by humans or machines.

Beyond IF-THEN/ELSE: Techniques for Conditional Execution of SAS Code. Nearly every SAS program includes logic that causes certain code to be executed only when specific conditions are met. This is commonly accomplished using IF...THEN...ELSE syntax. In this paper, we will explore various ways to construct conditional SAS logic, including some that may offer advantages over the IF statement. Topics include the SELECT statement, the IFC and IFN functions, the CHOOSE and WHICH families of functions, and the COALESCE function. We'll also make sure you understand the difference between a regular IF and the macro %IF statement.

A Waze App for Base SAS: Automatically Routing Around Locked Data Sets, Bottlenecked Processes, and Other Traffic Congestion on the Data Superhighway. The Waze application, purchased by Google in 2013, alerts millions of users about traffic congestion, collisions, construction, and other road complexities that can stymie motorists attempting to get from A to B. From jackknifed rigs to jackalope carcasses, roads can be gnarled by gridlock or littered with obstacles that impede traffic flow and efficiency. Waze algorithms automatically redirect users toward the most efficient route based on user-reported events as well as historical norms that demonstrate typical road conditions. Extract, transform, load (ETL) infrastructures often represent serialized process flows that can mimic highways, and they can become similarly snarled by locked data sets, slow processes, and other factors that introduce inefficiency. The LOCKITDOWN SAS macro, introduced at WUSS in 2014, detects and prevents the data access collisions that occur when two or more SAS processes or users simultaneously attempt to access the same SAS data set. Moreover, the LOCKANDTRACK macro, introduced at WUSS in 2015, provides real-time tracking of, and historical performance metrics for, locked data sets through a unified control table, enabling developers to hone processes to optimize efficiency and data throughput. This text demonstrates the implementation of LOCKSMART and its lock performance metrics to create data-driven, fuzzy logic algorithms that preemptively reroute program flow around inaccessible data sets.
Thus, rather than needlessly waiting for a data set to become available or for a process to complete, the software actually predicts the wait time from historical norms, performs other (independent) functions, and returns to the original process when it becomes available.

Ushering SAS Emergency Medicine into the 21st Century: Toward Exception Handling Objectives, Actions, Outcomes, and Comms. Emergency medicine features a continuum of care that often begins with first aid, basic life support (BLS), or advanced life support (ALS). First responders, including firefighters, emergency medical technicians (EMTs), and paramedics, are often the first to triage the sick, injured, and distressed, rapidly assessing the situation, providing curative and palliative care, and transporting patients to medical facilities. Emergency medical services (EMS) treatment protocols and standard operating procedures (SOPs) ensure that, despite the singular nature of each patient, as well as potential complications, trained personnel have a set of tools and techniques for delivering varying degrees of care in a standardized, repeatable, and accountable manner. Just as EMS providers must assess patients to prescribe an effective course of action, software too should identify and assess process deviation or failure, and likewise prescribe its commensurate course of action. Exception handling describes both the identification and the resolution of adverse, unexpected, or untimely events that can occur during software execution, and it should be implemented in SAS software that demands reliability and robustness. The objective of exception handling is always to redirect process control back to the "happy trail" or "happy path", i.e., the originally intended process path that delivers full business value. But when insurmountable events do happen, exception handling routines should instruct the process, program, or session to terminate gracefully to avoid damage or other adverse effects. Between the opposing outcomes of a fully recovered program and graceful program termination, however, lie many other exception resolution paths that can deliver full or partial business value, sometimes with only a slight delay. To that end, this text demonstrates these paths and discusses various internal and external modalities for communicating exceptions to SAS users, developers, and other stakeholders.

Wouldn't it be nice if your long-running program could tap you on the shoulder and say, "Okay, I'm all done now"? This quick tip will show you how easy it is to have your SAS program send you (or anyone else) an email during program execution. Once you've got the simple basics down, you'll come up with all sorts of uses for this great feature, and you'll wonder how you ever lived without it.

Finding All the Differences in Two SAS Libraries Using PROC COMPARE - Bharat Kumar Janapala. In the clinical industries, validating data sets by parallel programming and then comparing the derived data sets with PROC COMPARE is routine practice, but because of constant updates to the raw data it becomes difficult to understand the differences between two libraries.
The current program highlights all the differences between the libraries in the most optimized way, with the help of PROC COMPARE and the SAS dictionary tables. First, the program counts the data sets present in both libraries and lists the data sets that are not common to both. Second, the program looks up the total number of observations and variables for the data sets in both libraries and lists both the non-common variables and the data sets whose observation counts differ. Third, assuming the two libraries are identical, the program PROC COMPAREs the data sets with like names and captures the differences so they can be tracked, setting the maximum number of variable differences for optimization. Finally, the program reads all the differences and delivers a consolidated report, followed by a breakdown by data set.

Let the Environment Variable Help You: Moving Files Between Studies and Creating SAS Libraries On the Go. In clinical trials, data sets and SAS programs are stored under different studies for different products in Unix. SAS programmers need to access these locations frequently, to read in data for programming or to copy files for reuse in new analyses. Typing the long directory paths is time-consuming and nerve-racking. This paper describes an efficient way to store the various directory paths in advance in environment variables. These predefined environment variables can then be used for Unix file operations (copying, deleting, searching for files, and so on). The information carried by these variables can also be passed into SAS to build libraries wherever you go.

Check, Please: An Automated Approach to Log Checking. In the pharmaceutical industry, we find ourselves having to re-run our programs repeatedly for each deliverable. These programs can be run individually in an interactive SAS session, which lets us review the logs as we execute the programs. We could also run the programs individually in batch and open each individual log to review it for unwanted log messages such as ERROR, WARNING, uninitialized, were converted to, and so on. Either approach is fine if there are only a handful of programs to run. But what do you do if you have hundreds of programs that need to be re-run? Do you want to open every single one of the programs and search for unwanted messages? This manual approach could take hours and is prone to accidental oversight. This paper discusses a macro that searches a specified directory and checks either all the logs in the directory, only the logs with a specific naming convention, or only the files listed. The macro then produces a report that lists all the files checked and indicates whether any issues were found in them.

Let SAS Do Your Dirty Work. Making sure you have all the information needed to replicate a saved final result can be a daunting task. You want to make sure that all the raw data sets are saved, that all the derived data sets, whether SDTM or ADaM data sets, are saved, and you would prefer that the datetime stamps be preserved.
Not only do you need the data sets, you also need to keep a copy of every program that was used to produce the final result, along with the corresponding logs from when the programs were run. Any other information that was needed to achieve the required results must be saved as well. All of this needs to be done for every deliverable, and it can be easy to overlook a step or some key piece of information. Most people do this process manually, and it can be time-consuming, so why not let SAS do the work for you?

Checking the .LST Files with PROC COMPARE Results - Manvitha Yennam and Srinivas Vanam. The most widely used method for validating programs is double programming, which involves two programmers working on a single program and finally comparing their outputs using a procedure such as COMPARE. The PROC COMPARE results are usually produced as .LST files. Most companies perform the review manually, checking each .LST file to ensure that the outputs are similar. But this manual process is both time-consuming and error-prone. The purpose of this paper is to use a SAS macro instead of following the manual review process. Given a path, this SAS macro reads all the .LST files, creates a summary of the list files, and indicates whether each one has an issue or not, as well as the type of issue.

Read any publication, from national media to your local news website: academic achievement, particularly in STEM fields, is a serious concern, and billions of dollars are being spent to address the issue. How can SAS be applied to analyze the outcome of an intervention and, just as importantly, to convey the results of that analysis to a non-technical audience? Using real data from evaluations of educational games, this presentation walks through the stages of an evaluation, from needs assessment to measure validation to pre-post test comparison. The techniques applied include PROC FREQ with options for correlated data, PROC FACTOR for factor analysis, PROC TTEST, and PROC GLM for repeated measures ANOVA. Liberal use is made throughout of ODS Statistical Graphics. Using standard SAS/STAT procedures, these analyses can be run on any operating system with SAS, including SAS Studio on an iPad.

Constructing Confidence Intervals for Differences of Binomial Proportions in SAS. Given two binomial proportions, we wish to construct a confidence interval for their difference. The best-known method is the Wald method (that is, the normal approximation), but it can produce undesirable results in extreme cases (for example, when the proportions are near 0 or 1). Numerous other methods exist, including asymptotic methods, approximate methods, and exact methods. This paper presents 9 different methods for constructing such confidence intervals, 8 of which are available in SAS 9.3 procedures. The methods are compared, and thoughts are offered on which method to use.

An Animated Guide: Incremental Response Modeling in Enterprise Miner. Some people may be compelled to purchase a product without any marketing contact. If all potential customers are contacted, a company cannot determine the true effect of a marketing manipulation.
This talk uses the Incremental Response node in SAS Enterprise Miner to solve a basic marketing problem. Marketers typically target, and spend money contacting, all potential customers. This is wasteful, since some of those people would have become customers on their own. The node uses a data set to separate customers into groups: 1) those likely to buy, 2) those likely to buy if targeted by marketing, and 3) customers thought to be resistant to marketing efforts.

Employing Latent Analysis in Longitudinal Studies: An Exploration of Independently Developed SAS Procedures. This paper examines several ways of investigating latent variables in longitudinal surveys using three independently created SAS procedures. Three different analyses for latent variable discovery will be reviewed and explored: latent class analysis, latent transition analysis, and latent trajectory analysis. The latent analysis procedures explored in this paper (each of which was developed outside SAS Institute) are PROC LCA, PROC LTA, and PROC TRAJ. The specifics behind these procedures, and how to add them to one's procedure library, will be explored and then applied to an exploratory case study question. The effect of latent variables on the form and use of a regression model, compared with a similar model using observed data, will also be briefly reviewed. The data used for this study were obtained through the National Longitudinal Study of Adolescent Health, a study distributed and collected by Add Health. The data were analyzed using SAS 9.4. This paper is intended for moderate- to advanced-level SAS users, and it was written for an audience with a background in statistical and/or behavioral science.

MIghty PROC MI to the Rescue. Missing data is a feature of many data sets: participants may withdraw from studies or fail to provide self-reported measures, and sometimes technical problems interfere with data collection. If we use only complete observations, we end up with larger standard errors, wider confidence intervals, and larger p-values. Missing data methods such as complete case analysis or imputation can be used, but the missing data mechanisms and patterns must be understood first. This paper gives an overview of missing data sources, patterns, and mechanisms. A complete data set will be used to obtain the true regression analysis results. Two data sets with missing values will then be created, one with data missing completely at random and one with data not missing at random. The missing data methods of complete case analysis and single and multiple imputation will be applied. PROC MI and PROC MIANALYZE will be used in SAS 9.4 for the analysis. The results of the missing data methods will be compared with each other and with the true results.

John Amrhein and Fei Wang. Motivated by the frequent need for equivalence tests in clinical trials, this paper provides insights into testing for equivalence.
We summarize and compare equivalence tests for different study designs, including designs for the one-sample problem, designs for the two-sample problem (paired observations and two independent samples), and designs with multiple treatment arms. Power and sample size estimation are discussed. We also give examples implementing the methods using the FREQ, TTEST, MIXED, and POWER procedures in SAS/STAT software.

Distance Correlation for Vectors: A SAS Macro. Pearson's correlation coefficient is well known and widely used. However, it suffers from certain constraints: it is a measure of linear dependence (only), it does not provide a test of statistical independence, and it is restricted to univariate random variables. Since its inception, related measures and alternatives have been proposed to overcome these constraints, and several new measures to replace or complement Pearson's correlation have appeared in the statistical literature in recent years. Szekely et al. (2007) describe a new measure, distance correlation, that overcomes the shortcomings of Pearson's correlation. Distance correlation is defined for two random variables X and Y (which may be vectors) as a function of a weight, or distance, applied to the difference between the joint characteristic function of (X, Y) and the product of the individual characteristic functions of X and Y. In practice it is estimated by computing the individual distance matrices for X and Y, and the distance correlation is a similarity measure for the two matrices. For the bivariate normal case, distance correlation is a function of the Pearson correlation. Distance correlation also supports a related test of statistical independence, and it has performed well in simulation studies comparing it with other alternatives to Pearson's correlation. Here we present a Base SAS macro to compute the distance correlation for arbitrary real vectors.

Determining the Functionality of Water Pumps in Tanzania Using SAS EM and VA - Kiran Chowdaravarpu, Vivek Manikandan Damodaran, and Ram Prasad Poudel. Access to clean and hygienic drinking water is a basic luxury every human being deserves. In Tanzania, 23 million people do not have access to clean water and are forced to walk miles in order to fetch water for their daily needs. The prevailing problem is largely the result of poor maintenance and inefficient operation of existing infrastructure, such as hand pumps. To solve the current water crisis and guarantee access to potable water, the non-functional pumps and the functional pumps that need repair must be identified so that they can be fixed or replaced. It is highly cost-ineffective and impractical to inspect the functionality of over 74,251 water points manually in a country like Tanzania, where resources are very limited. The objective of this study is to build a model that predicts which pumps are functional, which need some repair, and which don't work at all, using data from the Tanzania Ministry of Water. We also identify the important variables that predict a pump's working condition. The data is managed by the Taarifa waterpoints dashboard. After preprocessing, the final data consists of 39 variables and 74,251 observations.
We used the SAS Bridge for ESRI and SAS VA to illustrate the spatial variation of functional water points at the regional level of Tanzania, along with other socioeconomic variables. Among decision tree, neural network, logistic regression, and HP random forest models, the HP random forest model was found to be the best. The misclassification rate, sensitivity, and specificity of the model are 24.91, 62.7, and 91.7 percent, respectively. Classifying the water pumps with the model will speed up water point operations, ensuring clean and accessible water across Tanzania at low maintenance cost and in a short period of time.

Fitting Threshold Models Using the SAS NLIN and NLMIXED Procedures. Hierarchical Generalized Linear Models for Behavioral Health Risk-Standardized 30-Day and 90-Day Readmission Rates. The Achievements in Clinical Excellence (ACE) program encourages excellence across behavioral health network facilities by promoting those that provide the highest quality of care. Two key outcome effectiveness benchmarks in the ACE program are the risk-adjusted 30-day readmission rate and the risk-adjusted 90-day readmission rate. Risk adjustment was performed with hierarchical generalized linear models (HGLM) to account for differences among hospitals in the demographic and clinical characteristics of their patients. One year of administrative admission data (June 30, 2013 through July 1, 2014) from patients in the 30-day (N=78,761, N hospitals=2,233) and 90-day (N=74,540, N hospitals=2,205) time intervals was the data source. HGLM simultaneously models two levels: 1) the patient level, the log-odds of hospital readmission as a function of age, sex, selected clinical covariates, and a hospital-specific intercept; and 2) the hospital level, a random hospital intercept that accounts for the within-hospital correlation of the observed outcomes. PROC GLIMMIX was used to implement an HGLM with hospital as a random (hierarchical) variable, separately for substance use disorder (SUD) admissions and mental health (MH) admissions, and the results were pooled to obtain a hospital-wide risk-adjusted readmission rate. The HGLM methodology was derived from the Centers for Medicare & Medicaid Services (CMS) documentation for the 2013 Hospital-Wide All-Cause Risk-Standardized Readmission Measure SAS package. This methodology was carried out separately on the 30-day and 90-day readmission data. The final metrics were a hospital-wide risk-adjusted 30-day percent readmission rate and a hospital-wide risk-adjusted 90-day percent readmission rate. The HGLM models were cross-validated on new production data that overlapped with the development sample. Revised HGLM models were tested in April 2015, and the outcome statistics were extremely similar. In short, testing of the revised models cross-validated the original HGLM models, because the revised models were based on different samples.

Demystifying the CONTRAST and ESTIMATE Statements. Many analysts are confused about how to use the CONTRAST and ESTIMATE statements in SAS to test a variety of general linear hypotheses (GLHs).
GLHs can be used to test parsimonious key comparisons and complex hypotheses. However, setting up even a simple GLH tends to intimidate some SAS users, and examples from various sources seem to arrive at the correct answer magically. The key is to understand how the procedure parameterizes the model and then to use that parameterization to construct the GLH. CONTRAST and/or ESTIMATE statements can be found in many of the modeling procedures in SAS; however, not all procedures use the same syntax for these statements. This presentation will demystify the use of the CONTRAST and ESTIMATE statements using examples in PROCs GLM, LOGISTIC, MIXED, GLIMMIX and GENMOD.

A Short Introduction to Reliability Engineering and PROC RELIABILITY for Non-Engineers. Reliability engineering studies how often a product or system fails under stated conditions over time. In the modern world, it is important that a product or system keep working for a long time, yet even though technology is well developed these days, some systems will eventually fail. Mathematical and statistical methods are useful for quantifying and analyzing reliability data. However, the first priority of reliability engineering is to apply engineering knowledge to prevent the likelihood of failures. This paper introduces the idea of reliability engineering to non-engineers, along with PROC RELIABILITY, and demonstrates some applications to reliability data.

Simulating Queuing Models in SAS. This paper introduces users to simulating queuing models using a set of SAS macros: MM1, MG1, and MMC. The macros simulate queuing systems in which entities (such as customers, patients, cars or email messages) arrive, get served either at a single station or at several stations in turn, might have to wait in one or more queues for service, and then may leave. After the simulation, SAS produces graphical output as well as a statistical analysis of the desired queuing model.

Selection Bias: How Can Propensity Score Utilization Help Control for It? An important strength of observational studies is the ability to estimate a key behavior or treatment's effect on a specific health outcome. This is a crucial strength, as most health outcomes research studies are unable to use experimental designs due to ethical and other constraints. Keeping this in mind, one drawback of observational studies (which experimental studies naturally control for) is that they lack the ability to randomize their participants into treatment groups. This can result in the unwanted inclusion of a selection bias. One way to adjust for a selection bias is through the use of a propensity score analysis. In this paper we explore an example of how to utilize these types of analyses. To demonstrate the technique, we will explore whether recent substance abuse has an effect on an adolescent's identification of suicidal thoughts. To conduct this analysis, a selection bias was identified and adjustment was sought through three common forms of propensity scoring: stratification, matching, and regression adjustment. Each form is separately conducted, reviewed, and assessed as to its effectiveness in improving the model. Data for this study was gathered through the Youth Risk Behavior Surveillance System, an ongoing nationwide project of the Centers for Disease Control and Prevention. This presentation is designed for any level of statistician, SAS programmer, or data analyst with an interest in controlling for selection bias.
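As a rough illustration of the regression-adjustment flavor of propensity scoring described above (a sketch only, not the authors' code; the data set yrbs and all variable names are hypothetical), the score can be estimated with PROC LOGISTIC and then carried into the outcome model as a covariate:

    /* Step 1: model the exposure as a function of observed covariates
       and save the predicted probability as the propensity score. */
    proc logistic data=yrbs;
       class sex race / param=ref;
       model substance_abuse(event='1') = age sex grade race;
       output out=ps_data pred=pscore;
    run;

    /* Step 2: regression adjustment - enter the propensity score as a
       covariate in the outcome model for suicidal thoughts. */
    proc logistic data=ps_data;
       model suicidal_thoughts(event='1') = substance_abuse pscore;
    run;

Stratification and matching share the same first step but group or pair observations on pscore instead of entering it directly into the outcome model.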
Using SAS to Analyze Countywide Survey Data: A Look at Adverse Childhood Experiences and Their Impact on Long-Term Health. The adverse childhood experiences (ACEs) scale measures childhood exposure to abuse and household dysfunction. Research suggests ACEs are associated with higher risks of engaging in risky behaviors, poor quality of life, morbidity, and mortality later in life. In Santa Clara County, a large diverse county where 88% of residents have household internet access, we conducted a countywide Behavioral Risk Factor Survey of adults with a unique web-based follow-up. We conducted a random-digit-dial telephone survey (N=4,186) and a follow-up online survey using the CDC BRFSS ACE module. Of those eligible for the web-based survey, the response rate was 33%. The online ACE module comprised 11 questions forming 8 categories of abuse and household dysfunction. PROC SURVEYFREQ and PROC SURVEYLOGISTIC were used in SAS 9.4 to analyze the survey data and provide countywide estimates for Santa Clara County as a whole. Most respondents (74%) reported having experienced 1 or more ACEs. Emotional abuse was the most common (44%), followed by household substance abuse (28%) and household mental illness (25%). The prevalence of emotional abuse, household substance abuse, physical abuse, and household mental illness was highest among individuals with high (3 or more) and low (1-2) ACE counts. Indicators of perceived poor health showed a strong association among individuals with ACEs. The odds of 1 or more poor mental health days in the past month were higher among individuals with low ACEs (OR=2.86), high ACEs (OR=6.74), and among women (OR=2.27). A web-based survey offers a reliable means of assessing a population about sensitive subjects like ACEs at lower cost than a telephone survey in smaller jurisdictions. The results suggest ACEs are common among adults in the county and may be under-reported in telephone interviews. PROC SURVEYFREQ and PROC SURVEYLOGISTIC in SAS are powerful tools for analyzing survey data, especially for small-area estimates of the health of county residents.

How D-I-D You Do That? Basic Difference-in-Differences Models in SAS. Long a mainstay in econometrics research, difference-in-differences (DID) models have only recently become more commonly used in health services and epidemiologic research. DID study designs are quasi-experimental, can be used with retrospective observational data, and do not require exposure randomization. This study design estimates the difference in pre-post changes in an outcome, comparing an exposed group with an unexposed (reference) group. The outcome change in the unexposed group estimates the expected change in the exposed group had the group been, counterfactually, unexposed. By subtracting this change from the change in the exposed group (the "difference in differences"), the effects of background secular trends are removed. In the basic DID model, each subject serves as his or her own control, removing confounding by known and unknown individual factors associated with the outcome of interest. Thus, the DID generates a causal estimate of the change in an outcome associated with the initiation of the exposure of interest while controlling for biases due to secular trends and confounding. A basic repeated-measures generalized linear model provides estimates of population-average slopes between two time points for the exposed and unexposed groups and tests whether the slopes differ by including an interaction term between the time and exposure variables.
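That repeated-measures formulation can be sketched directly in SAS (a minimal, hypothetical example, not the paper's code; the data set and variable names are invented). The coefficient on the time-by-exposure interaction is the difference-in-differences estimate:

    /* y is observed pre and post for exposed and unexposed subjects;
       the exposed*period coefficient is the DID estimate. */
    proc genmod data=did_panel;
       class id exposed period;
       model y = exposed period exposed*period / dist=normal link=identity;
       repeated subject=id / type=exch;  /* each subject is their own control */
    run;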
In this paper, we illustrate the concepts behind the basic DID model and present SAS code for running these models. We include a brief discussion of more advanced DID methods and present an example of a real-world analysis using data from a study on the impact of introducing a value-based insurance design (VBID) medication plan at Kaiser Permanente Northern California on change in medication adherence.

Using PROC PHREG to Assess Hazard Ratios in a Longitudinal Environmental Health Study. Air pollution, especially combustion products, can activate metabolic disorders through inflammatory pathways, potentially leading to obesity. The effect of air pollution on BMI growth was shown by a previous study (Jerrett, et al., 2014). Recognizing the role of air pollution in the development of obesity in children can help guide possible interventions to reduce obesity formation. The objective of this paper is to analyze the obesity incidence of children participating in the Children's Hospital Study (CHS) who were non-obese at baseline, identify the time interval for the onset of obesity, and identify the effects of various risk factors, especially air pollutants. The PHREG procedure was used within a macro to create a model that included community random effects, was stratified by sex, and adjusted for baseline characteristics.

Using PROC LOGISTIC for Conditional Logistic Regression to Evaluate Vehicle Safety Performance. The LOGISTIC procedure has several capabilities beyond standard logistic regression on binary outcome variables. For a conditional logit model, PROC LOGISTIC can perform several types of matching: 1:1, 1:M, and even M:N matching. This paper shows an example of using PROC LOGISTIC for conditional logit models to evaluate vehicle safety performance in fatal accidents using the Fatality Analysis Reporting System (FARS) 2004-2011 database. Conditional logistic regression models were fit with an additional stratum parameter to model the relationship between driver fatality and the vehicle's continent of origin.

Identifying Duplicates Made Easy - Elizabeth Angel and Yunin Ludena. Have you ever had trouble removing or finding the exact type of duplicate you want? SAS offers several different ways to identify, extract, and/or remove duplicates, depending on exactly what you want. We will start by demonstrating perhaps the most commonly used method, PROC SORT, and the types of duplicates it can identify and how to remove, flag, or store them. Then we will present the other, less commonly used methods, which can give information that PROC SORT cannot offer, including the DATA step (FIRST./LAST.), PROC SQL, PROC FREQ, and PROC SUMMARY. The programming is demonstrated at a beginner's level (a short sketch of two of these methods appears at the end of this section).

Don't Forget About Small Data. Beginning in the world of data analytics and eventually flowing into mainstream media, we are seeing a lot about Big Data and how it can influence our work and our lives. Through examples, this paper explores how Small Data, which is everything Big Data is not, can and should influence our programming efforts. The ease with which we can read and manipulate data from different formats into usable tables in SAS makes using data to manage data very simple and supports healthy and efficient practices. This paper explores how using small or summarized data can help to organize and track program development, simplify coding, and optimize code.
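Here is the promised sketch of two of the duplicate-handling methods surveyed above (a generic illustration, not the authors' code; the data set and BY variables are hypothetical). PROC SORT can strip duplicate keys while saving the removed records for review, and FIRST./LAST. logic can flag duplicates instead of deleting them:

    /* Remove duplicate key values, diverting the removals to a data set. */
    proc sort data=work.claims out=work.deduped nodupkey dupout=work.dups;
       by patient_id visit_date;
    run;

    /* Or flag duplicates with FIRST./LAST. processing in a DATA step. */
    proc sort data=work.claims out=work.sorted;
       by patient_id visit_date;
    run;

    data work.flagged;
       set work.sorted;
       by patient_id visit_date;
       dup_flag = not (first.visit_date and last.visit_date);
    run;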
Let the CAT Out of the Bag: String Concatenation in SAS 9. Are you still using TRIM, LEFT, and vertical bar operators to concatenate strings? It's time to modernize and streamline that clumsy code by using the string concatenation functions introduced in SAS 9. This paper is an overview of the CAT, CATS, CATT, and CATX functions introduced in SAS 9, and the new CATQ function added in SAS 9.2. In addition to making your code more compact and readable, this family of functions also offers some new tricks for accomplishing previously cumbersome tasks.

SAS Abbreviations: A Shortcut for Remembering Complicated Syntax - Yaorui Liu, Department of Preventive Medicine, University of Southern California. One of many difficulties for a SAS programmer is remembering how to accurately use SAS syntax, especially syntax that includes many parameters. Not mastering the basic syntax parameters by heart will definitely make one's coding inefficient, because one has to check the SAS reference manual constantly to ensure that the syntax is implemented properly. One of the more useful tools in SAS, but one seldom known by novice programmers, is SAS Abbreviations. It allows users to store text strings, such as the syntax of a DATA step function, a SAS procedure, or a complete DATA step, under a user-defined and easy-to-remember abbreviated term. Once this abbreviated term is typed within the Enhanced Editor, SAS automatically brings up the corresponding stored syntax. Knowing how to use SAS Abbreviations will ultimately benefit programmers with varying levels of SAS expertise. In this paper, various examples of using SAS Abbreviations are demonstrated.

Implementation of Good Programming Practices in Clinical SAS. Base SAS software provides users with many choices for accessing, manipulating, analyzing, and processing data and results. Partly due to the power offered by the SAS software and the size of data sources, many application developers and end users are in need of guidelines for more efficient use. This presentation highlights my personal top ten list of performance tuning techniques for SAS users to apply in their applications. Attendees learn DATA and PROC step language statements and options that can help conserve CPU, I/O, data storage, and memory resources while accomplishing tasks involving processing, sorting, grouping, joining (merging), and summarizing data.

Sorting a Bajillion Records: Conquering Scalability in a Big Data World. "Big data" is often distinguished as encompassing high volume, velocity, or variability of data. While big data can signal big business intelligence and big business value, it also can wreak havoc on systems and software ill-prepared for its profundity. Scalability describes the ability of a system or software to adequately meet the needs of additional users, or its ability to utilize additional processors or resources to fulfill those added requirements. Scalability also describes the adequate and efficient response of a system to increased data throughput. Because sorting data is one of the most common as well as resource-intensive operations in any software language, inefficiencies or failures caused by big data often are first observed during sorting routines.
Sorting a Bajillion Records: Conquering Scalability in a Big Data World
"Big data" is often characterized by high volume, velocity, or variability. While big data can signal big business intelligence and big business value, it can also wreak havoc on systems and software ill-prepared for its profundity. Scalability describes the ability of a system or software to adequately meet the needs of additional users, or its ability to utilize additional processors or resources to fulfill those added requirements; it also describes the adequate and efficient response of a system to increased data throughput. Because sorting is one of the most common and most resource-intensive operations in any software language, inefficiencies or failures caused by big data are often first observed during sorting routines. Much SAS® literature has been dedicated to optimizing big-data sorts for efficiency, including minimizing execution time and, to a lesser extent, minimizing resource usage (i.e., memory and storage consumption). Less attention has been paid, however, to implementing big-data sorting that is reliable and robust even when confronted with resource limitations. To that end, this text introduces the SAFESORT macro, which facilitates a priori exception-handling routines (which detect environmental and data set attributes that could cause process failure) and post hoc exception-handling routines (which detect actual failed sorting routines). If exception handling is triggered, SAFESORT automatically reroutes program flow from the default sort routine to a less resource-intensive routine, sacrificing execution speed for reliability. However, because SAFESORT does not exhaust system resources like default SAS sorting routines, in some cases it performs more than 200 times faster than the default methods. Macro modularity moreover allows developers to select their favorite sorting routine and, for data-driven disciples, to build fuzzy-logic routines that dynamically select a sort algorithm based on environmental and data set attributes.

SAS Integration with NoSQL Databases
We are living in a world of abundant data, so-called "big data". The term is closely associated with data of any structure: unstructured, structured, and semi-structured. Data are called "unstructured" or "semi-structured" when they do not fit neatly into a traditional row-column relational database. A NoSQL ("Not only SQL", or non-relational) database is a database that can handle data of any structure, for example XML (Extensible Markup Language), JSON (JavaScript Object Notation), or RDF (Resource Description Framework) files. If an enterprise can extract such data from NoSQL databases and transfer it to the SAS environment for analysis, it will produce tremendous value, especially from a big-data solutions standpoint. This paper shows how data of any structure is stored in NoSQL databases and ways to transfer it to the SAS environment for analysis. First, the paper introduces NoSQL databases. Second, it shows how the SAS system connects to NoSQL databases using a REST (Representational State Transfer) API (Application Programming Interface); for example, SAS programmers can use PROC HTTP to extract XML or JSON files through the REST API. Finally, it shows how SAS programmers can convert XML and JSON files to SAS datasets for analysis; for example, by creating XMLMap files and using the XMLV2 LIBNAME engine to convert extracted XML files to SAS datasets.
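A minimal sketch of the REST extraction pattern the NoSQL abstract describes; the endpoint URL is hypothetical, and the JSON LIBNAME engine shown requires SAS 9.4M4 or later:

    filename resp temp;

    /* Pull a JSON document from a (hypothetical) REST endpoint */
    proc http
      url="https://example.com/api/records"
      method="GET"
      out=resp;
    run;

    /* Read the JSON response directly as SAS tables */
    libname nosql json fileref=resp;

    proc datasets lib=nosql nolist;   /* inspect the generated tables */
      contents data=_all_;
    quit;

    data work.records;
      set nosql.root;                 /* ROOT holds the top-level JSON objects */
    run;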
DS2 Versus DATA Step: Efficiency Considerations
There is recognition that in large, complex systems the object-oriented concepts available in DS2, such as modularity, code reuse, and ease of debugging, can provide increased efficiency. Object-oriented programming also allows multiple teams of developers to work on the same project easily. DS2 was designed for data manipulation and data modeling applications that can achieve increased efficiency by running code in threads, splitting the data across multiple processors and disks. Of course, performance also depends on hardware architecture and on the effort you put into tuning your architecture and code. Join our panel for a discussion of architecture, tuning, and data-size considerations in determining whether DS2 is the more efficient alternative.

Using Shared Accounts in Kerberized Hadoop Clusters with SAS®: How Can I Do That?
Using shared accounts to access third-party data servers is a common architecture in SAS® environments. SAS software can support seamless user access to shared accounts in databases such as Oracle via group definitions and outbound authentication domains in metadata. However, the configurations necessary to leverage shared accounts in Hadoop clusters with Kerberos authentication are more complicated: not only must Kerberos tickets be generated and maintained simply to access the Hadoop environment, but those tickets must grant access as the shared account rather than as the individual users' accounts. Methods for implementing this arrangement in SAS environments can be non-intuitive. This paper starts by outlining several general architectures of shared accounts in Kerberized Hadoop environments. It then presents possible methods of managing such shared-account access in SAS environments, including specific implementation details, code samples, and security implications. Finally, troubleshooting methods are presented for when issues arise. Example code and configurations for this paper were developed on a SAS 9.4 system running on Red Hat Enterprise Linux 6.

What Just Happened? A Visual Tool for Highlighting Differences Between Two Data Sets
Base SAS includes a great utility for comparing two data sets: PROC COMPARE. The output, though, can be hard to read, because the differences between values are listed separately for each variable; it is hard to see the differences across all variables for the same observation. This talk presents a macro to compare two SAS data sets and display the differences in Excel. PROC COMPARE's OUT= option creates an output data set with all the differences. This data set is then processed with PROC REPORT using ODS EXCEL and colour highlighting to show the differences in an Excel workbook, making them easy to see.

Tips and Tricks for Producing Time-Series Cohort Data
Developers working on a production process need to think carefully about ways to avoid future changes that require change control, so it is always important to make the code dynamic rather than hardcoding items into it. Even seasoned programmers may not spot every hardcoded item. This paper helps identify the harder-to-find hardcoded items and addresses ways to use control tables effectively within SAS® to deal with sticky areas of coding such as formats, parameters, grouping hierarchies, and standardization. The paper presents examples of several ways to use control tables and demonstrates why this usage prevents the need for coding changes. Practical applications are used to illustrate these examples.

The Power of the Function Compiler: PROC FCMP
PROC FCMP, the user-defined function procedure, allows SAS users of all levels to get creative with SAS and expand their scope of functionality. PROC FCMP is the superhero of SAS functions in its vast capability to create and store uniquely defined functions that can later be used in DATA steps. This paper outlines the basics as well as tips and tricks for getting the most out of this procedure.
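A minimal sketch of the PROC FCMP pattern just described, defining a function once and calling it in a DATA step; the function, library, and package names are made up for illustration:

    /* Compile a simple BMI function into a function library */
    proc fcmp outlib=work.funcs.demo;
      function bmi(weight_kg, height_m);
        return (weight_kg / height_m**2);
      endsub;
    run;

    /* Point the compiler at the library, then call the function */
    options cmplib=work.funcs;

    data _null_;
      b = bmi(70, 1.75);
      put b= 5.1;
    run;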
Creating Viable SAS® Data Sets from SurveyMonkey® Transport Files
SurveyMonkey is an application that provides a means for creating online surveys. Unfortunately, the transport (Excel) file from this application requires a complete overhaul before any serious data analysis can be done. Besides having a peculiar structure and containing extraneous data points, the column headers become very problematic when the file is imported into SAS; in fact, the initial SAS data set is virtually unusable. This paper explains a systematic approach to creating a viable SAS data set for serious analysis.

Document and Enhance Your SAS® Code, Data Sets, and Catalogs with SAS Functions, Macros, and SAS Metadata
Roberta Glass and Louise Hadden
Discover how to document your SAS® programs, data sets, and catalogs with a few lines of code that include SAS functions, macro code, and SAS metadata. Do you start every project with the best of intentions to document all of your work, and then fall short of that aspiration when deadlines loom? Learn how your programs can automatically update your processing log. If you have ever wondered who ran a program that overwrote your data, SAS has the answer. And if you don't want to trace back through a year's worth of code to produce a codebook for your client at the end of a contract, SAS has the answer there too.

Don't Get Blindsided by PROC COMPARE
For a statistical programmer in the pharmaceutical industry, each work day is new. A project you have been working on for a few months can change at a moment's notice, and you need to implement changes quickly and accurately. To ensure that desired changes are made quickly, and above all accurately, when the task entails a find-and-replace across all the SAS programs in a directory (or multiple directories), a macro called "Replacer" can come to the rescue. Process flow: first, it reads all the SAS programs in a directory one by one and converts each program to a SAS dataset using grepline. It then reads each dataset, replacing the existing string with the desired string using if-then conditional logic. Finally, it writes each updated dataset back out as a new SAS program at a specified location. The macro takes multiple parameters, the input directory, the output directory, and the from and to strings, which gives the programmer more control over the process. A quick example of its practical use: when migrating from a Windows to a UNIX server, we needed to change the path of our init.sas and change all backslashes (\) to forward slashes (/). Assume we have 100 programs and decide to do this manually: it would be a cumbersome task, and given time constraints, accuracy is not guaranteed. A programmer might spend a couple of hours completing the changes in every program before re-running them all to confirm the edits. Replacer can accomplish the same task in less than two minutes.
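Since both this abstract and the comparison macro earlier lean on PROC COMPARE for validation, a minimal, self-contained sketch of capturing differences as data rather than reading the listing:

    data base;
      set sashelp.class;
    run;

    data comp;                      /* a copy with one planted difference */
      set sashelp.class;
      if name = 'Alfred' then weight = weight + 5;
    run;

    /* OUT= plus OUTNOEQUAL keeps only observations that differ */
    proc compare base=base compare=comp out=diffs outnoequal noprint;
      id name;
    run;

    proc print data=diffs;          /* _TYPE_ shows where each row came from */
    run;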
Ditch the Data Memo: Using Macro Variables and OUTER UNION CORRESPONDING in PROC SQL to Create Data Set Summary Tables
Data set documentation is essential to good programming and for sharing data set information with colleagues who are not SAS programmers. However, most SAS programmers dislike writing memos that must be updated each time a dataset is manipulated. Utilizing two tools, macro variables and the OUTER UNION CORRESPONDING set operator in PROC SQL, we can write concise code that exports a single summary table containing important data set information, serving in lieu of data memos. These summary tables can contain the following data set information and much more: 1) the change in the number of records in a dataset due to dropping records, collapsing across IDs, or removing duplicates; 2) summary statistics of key variables; and 3) trends across time. This presentation requires some basic understanding of macros and SQL queries.

File Management Using Pipes and X Commands in SAS®
SAS for Windows can be an extremely powerful piece of software, not only for analyzing data, but also for organizing and maintaining output and permanent datasets. By employing pipes and operating-system ('X') commands within a SAS session, you can easily and effectively manage files of all types stored on your local network.

Handling Longitudinal Data from Multiple Sources: Experience with Analyzing Kidney Disease Patients
Elani Streja and Melissa Soohoo
Analyses in health studies using multiple data sources often come with a myriad of complex issues such as missing data, merging multiple data sources, and date matching. Combining multiple data sources is not straightforward: often there is discordant or missing information such as dates of birth, dates of death, and even demographic information such as sex, race, ethnicity, and pre-existing comorbidities. It therefore becomes essential to document the data source from which the variable information was retrieved. Analysts often rely on one source as the dominant variable for analysis and ignore information from the others; sometimes even the database thought to be the "gold standard" is in fact discordant with other sources. In order to increase sensitivity and information capture, we created a source variable that records the combination of sources from which the data were concordant and derived. In our example, we show how to resolve information on date of birth, date of death, date of transplant, sex, and race combined from three data sources on kidney disease patients: the United States Renal Data System, the Scientific Registry of Transplant Recipients, and data from a large dialysis organization. This paper focuses on approaches to handling multiple large databases in preparation for analyses. In addition, we show how to summarize and prepare longitudinal lab measurements (from multiple sources) for use in analyses.

An Array of Fun: Macro Variable Arrays
Like all skilled tradespeople, SAS® programmers have many tools at their disposal, and part of their expertise lies in knowing when to use each one. In this paper, we use a simple example to compare several common approaches to generating a requested report: the TABULATE, TRANSPOSE, REPORT, and SQL procedures. We investigate the advantages and disadvantages of each method and consider when applying it might make sense. A variety of factors are examined, including the simplicity, reusability, and extensibility of the code, in addition to the opportunities each method provides for customizing and styling the output. The intended audience is beginning to intermediate SAS programmers.
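As a minimal sketch of the macro-variable-array idea named in the title above, built with the common SELECT ... INTO pattern in PROC SQL:

    /* Load values into numbered macro variables &name1-&name19 */
    proc sql noprint;
      select name
        into :name1 - :name19
        from sashelp.class;
      %let n = &sqlobs;           /* how many were actually filled */
    quit;

    /* Walk the "array" with a macro %DO loop */
    %macro list_names;
      %do i = 1 %to &n;
        %put Item &i is &&name&i;
      %end;
    %mend list_names;
    %list_names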
Something Old, Something New: Flexible Reporting with DATA Step-Based Tools
The report looks simple enough: a bar chart and a table, like something created with the GCHART and REPORT procedures. But there are some twists to the reporting requirements that make those procedures not quite flexible enough. The solution was to mix "old" and "new" DATA step-based techniques to solve the problem: Annotate datasets are used to create the bar chart, and the Report Writing Interface (RWI) is used to create the table. Without a whole lot of additional code, an extreme amount of flexibility is gained. The goal of this paper is to use a specific example to illustrate a couple of generic programming principles (at least in SAS®). 1. The tools you choose are not always the most obvious ones. So often, whether from habit or comfort level, we get zeroed in on specific tools for reporting tasks. Have you ever heard anyone say, "I use TABULATE for everything" or "Isn't PROC REPORT wonderful, it can do anything"? While these tools are great (I've written papers on their use), it is very easy to get into a rut, squeezing out results that might have been produced more easily, flexibly, or effectively with something else. 2. It is often easier to make your data fit your reporting than to make your reporting fit your data. It always takes data to create a report, and it is very common to let the data drive the report development. We struggle and fight to get the reporting procedures to work with our data. There are numerous examples of complicated REPORT or TABULATE code that works around the structure of the data; however, the data manipulation tools in SAS (DATA step, SQL, procedure output) can often be used to preprocess the data to make the report code significantly simpler and easier to maintain and modify.

PROC DOCUMENT, the Powerful Utility for ODS Output
The DOCUMENT procedure is a little-known procedure that can save you vast amounts of time and effort when managing the output of your SAS® programming efforts. This procedure is closely tied to the mechanism by which SAS controls output in the Output Delivery System (ODS). Have you ever wished you didn't have to modify and rerun the report-generating program every time there was some tweak in the desired report? PROC DOCUMENT enables you to store one version of the report as an ODS document object and then replay it in many different output forms, such as PDF, HTML, listing, RTF, and so on, without rerunning the code. Have you ever wished you could extract just those pages of the output that apply to certain BY variables, such as State, StudentName, or CarModel? PROC DOCUMENT gives you WHERE-style capabilities to extract them. Do you want to customize the table of contents that assorted SAS procedures produce when you make frames for the table of contents with HTML, or use the facilities available for PDF? PROC DOCUMENT enables you to get at the inner workings of ODS and manipulate them. This paper addresses PROC DOCUMENT from the viewpoint of end results rather than providing a complete technical review of how to do the task at hand. The emphasis is on the benefits of using the procedure, not on detailed mechanics. A number of practical applications are presented for everyday, real-life challenges that arise in manipulating output in HTML, PDF, and RTF formats.
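A minimal sketch of the store-once, replay-anywhere pattern the PROC DOCUMENT abstract describes:

    /* Capture procedure output once as an ODS document item store */
    ods document name=work.rpt(write);
    proc means data=sashelp.class;
      class sex;
      var height weight;
    run;
    ods document close;

    /* Later: replay the stored output to PDF without rerunning PROC MEANS */
    ods pdf file="class_report.pdf";
    proc document name=work.rpt;
      replay;
    run;
    quit;
    ods pdf close;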
A SAS Macro for Quick Descriptive Statistics
Arguably, the table most frequently required in publications is the description-of-the-sample table, fondly referred to among statisticians as "Table 1". This table displays means and standard errors, medians and IQRs, and counts and percentages for the variables in the sample, often stratified by some variable of interest (e.g., disease status, recruitment site, sex). While this table is extremely useful, constructing it can be time-consuming and, frankly, rather boring. I will present two SAS macros that facilitate the creation of Table 1. The first is a "quick and dirty" macro that outputs the results for Table 1 in nearly every situation; the second is a "pretty" macro that outputs a well-formatted Table 1 for a specific situation.

Controlling Colors by Name: Selecting, Ordering, and Using Colors for Your Viewing Pleasure
Within SAS®, literally millions of colors are available for use in our charts, graphs, and reports. We can name these colors using techniques that include color wheels, RGB (Red, Green, Blue) HEX codes, and HLS (Hue, Lightness, Saturation) HEX codes. But sometimes I just want to use a color by name. When I want purple, I want to be able to ask for purple, not CX703070 or H03C5066. But am I limiting myself to just one purple? What about light purple, or pinkish purple? Do those colors have names, or must I use the codes? It turns out that they do have names: names that we can use, select, and order; names that we can use to build our graphs and reports. This paper shows how to gather color names and manipulate them so that you can take advantage of your favorite purple, be it 'purple', 'grayish purple', 'vivid purple', or 'pale purplish blue'. Much of the control is obtained through user-defined formats; learn how to build these formats from a data set containing a list of these colors.

Tweaking Your Tables: Suppressing Superfluous Subtotals in PROC TABULATE
PROC TABULATE is a great tool for generating cross-tab style reports. It is very flexible but has a few annoying limitations. One is superfluous subtotals: the ALL keyword creates a total or subtotal for the categories in one dimension, but if there is only one category in the dimension, the subtotal is still shown, which simply repeats the detail line again and can look a bit strange. This talk demonstrates a method to suppress those superfluous totals by saving the output from PROC TABULATE with the OUT= option. That data set is then reprocessed to remove the undesirable totals, using the _TYPE_ variable to identify the total rows, and PROC TABULATE is run again against the reprocessed data set to create the final table.
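A compact sketch of the OUT=/_TYPE_ reprocessing idea just described, simplified to the filtering step; with CLASS sex age, rows with _TYPE_='10' are the ALL-over-age subtotals:

    /* First pass: save the crosstab, subtotals included */
    proc tabulate data=sashelp.class out=work.tab;
      class sex age;
      table sex*(age all), n;
    run;

    /* _TYPE_ flags which CLASS variables define each output row */
    data work.tab2;
      set work.tab;
      if _type_ = '10' then delete;   /* drop the unwanted subtotal rows */
    run;

    /* The full method then runs PROC TABULATE again against WORK.TAB2 */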
Indenting with Style
Within the pharmaceutical industry, many SAS programmers rely heavily on PROC REPORT. While it is used extensively for summary tables and listings, it is more typical for all processing to be done prior to the final report procedure rather than using some of its internal functionality. Many typical summary tables require some indenting, whether to combine information into a single column to gain more printable space (as with many treatment-group columns) or simply to make the output more aesthetically pleasing. A standard approach is to pad a character string with spaces to give the appearance of indenting. This requires pre-processing of the data as well as the ASIS=ON option in the column style, and while it may be sufficient in many cases, it fails for longer text strings that require wrapping within a cell. Alternative approaches that conditionally utilize the INDENT and LEFTMARGIN options of a column style are presented. This quick-tip presentation describes such options for indenting, and example outputs demonstrate the pros and cons of each. PROC REPORT and ODS are used in this application with SAS 9.4 in a Windows environment.

SAS® Office Analytics, an Application in Practice: Data Monitoring and Reporting Using Stored Processes
Mansi Singh, Kamal Chugh, Chaitanya Chowdagam, and Smitha Krishnamurthy
Time becomes a big factor when it comes to ad hoc reporting and real-time monitoring of data while project work is in full swing. There are always numerous urgent requests from various cross-functional groups regarding study progress, and a programmer typically has to handle these requests alongside the study work, which can become stressful. To address this growing need for real-time data monitoring and tailored, portable reports, SAS® offers a powerful tool called SAS Office Analytics. SAS Office Analytics with the Microsoft® Add-In provides excellent real-time data monitoring and report-generation capabilities with which a SAS programmer can take ad hoc requests and data monitoring to the next level. Using this tool, a programmer can build interactive, customized reports and provide access to study data; anyone with knowledge of Microsoft Office can then view, customize, and/or comment on these reports within Microsoft Office, with the power of SAS running in the background. This paper is a step-by-step guide to creating these customized reports in SAS and accessing study data using the Microsoft Office Add-In.

Getting It Done with PROC TABULATE
From state-of-the-art research to routine analytics, the Jupyter Notebook offers an unprecedented reporting medium. Historically, tables, graphics, and other output had to be created separately and then integrated into a report piece by piece amid the drafting of the text. The Jupyter Notebook interface allows for the creation of code cells and markdown cells in any arrangement. While the markdown cells admit all the typical sorts of formatting, the code cells can be used to run code within and throughout the document. In this way, report creation happens naturally and in a completely reproducible way. Handing a colleague a Jupyter Notebook file to be re-run or revised is much easier and simpler than passing along at least two files: the code and the text. With the new SAS® kernel for Jupyter, all of this is possible and more.

Clinton vs. Trump 2016: Analyzing and Visualizing Sentiments Towards Hillary Clinton and Donald Trump's Policies
Sid Grover and Jacky Arora
The United States 2016 presidential election has seen unprecedented media coverage, numerous presidential candidates, and acrimonious debate over wide-ranging topics from candidates of both the Republican and the Democratic parties. Twitter is a dominant social medium for people to understand, express, relate to, and support the policies proposed by their favorite political leaders. In this paper, we aim to analyze the overall sentiment of the public towards some of the policies proposed by Donald Trump and Hillary Clinton using Twitter feeds.
We have started to extract live streaming data from Twitter. So far, we have extracted about 200,000 Twitter feeds through Twitter's live-stream API, using mytwitterscraper, an open-source, real-time Twitter scraper written in Java. We will use SAS® Enterprise Miner and SAS® Sentiment Analysis Studio to describe and assess how people are reacting to each candidate's stand on issues such as immigration and taxes. We will also track and identify patterns of sentiment shifting across time (from March to June) and geographic regions.

Donor Sentiment Analysis of Presidential Primary Candidates Using SAS
In this paper, we explore the advantages of using the DS2 procedure over DATA step programming in SAS®. DS2 is a SAS proprietary programming language appropriate for advanced data manipulation. We explore the use of PROC DS2 to execute queries in databases using FedSQL from within the DS2 program. Several DS2 language elements accept embedded FedSQL syntax, and the run-time-generated queries can exchange data interactively between DS2 and supported databases. This enables SQL preprocessing of input tables, which effectively allows processing data from multiple tables in different databases within the same query, thereby drastically reducing processing times and improving performance. We explore the use of DS2 for creating tables, bulk loading tables, manipulating tables, and querying data efficiently. We explore the advantages of PROC DS2 over the DATA step, such as support for additional data types, ANSI SQL types, programming structure elements, and the benefits of writing one's own methods or packages available in the DS2 system. We also explore the high-performance version of the DS2 procedure, PROC HPDS2, and show how one can submit DS2 language statements for execution either to a single machine running multiple threads or to a distributed computing environment, including the SAS LASR Analytic Server, thereby massively reducing processing times and improving performance. The DS2 procedure enables users to submit DS2 language statements from a Base SAS session, with requests processed by the DS2 data access technology, which supports a scalable, threaded, high-performance, and standards-based way to access, manage, and share relational data. In the end, we empirically measure the performance benefits of using PROC DS2 over PROC SQL for processing queries in-database, taking advantage of threaded processing in supported databases such as Oracle.
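A minimal PROC DS2 sketch showing the declared types and method blocks the abstract refers to; a toy data program, not the in-database usage the paper benchmarks:

    proc ds2;
      data work.squares (overwrite=yes);
        dcl double x sq;          /* explicit ANSI-style declarations */
        method run();             /* runs once: there is no SET statement */
          do x = 1 to 5;
            sq = x**2;
            output;
          end;
        end;
      enddata;
      run;
    quit;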
Social Media, Anonymity, and Fraud: HP Forest Node in SAS® Enterprise Miner™
You may encounter people who used SAS® long ago (perhaps in university) or through very limited use in a job. Some of these people with limited knowledge and experience think that the SAS system is "just a statistics package" or "just a GUI", the latter usually a reference to SAS® Enterprise Guide® or, if a dated reference, to the legacy SAS/AF® or SAS/FSP® applications. The reality is that the modern SAS system is a very large, complex ecosystem, with hundreds of software products and a diversity of tools for programmers and users. This poster provides a set of diagrams and tables that illustrate the complexity of the SAS system from the perspective of a programmer. Diagrams and illustrations include: the different environments that program code can run in; cross-environment interactions and related tools; SAS Grid parallel processing; SAS running with files in memory (the legacy SASFILE statement and big data/Hadoop); and code that can run in-database. We end with a tabulation of the many programming languages and SQL dialects that are directly or indirectly supported within SAS. Hopefully the content of this poster will inform those who think that SAS is an old, dated statistics package or just a simple GUI.

Leadership: More than Just a Position. Laws of Programming Leadership
As someone studying statistics in the data-science era, more and more emphasis is put on illustrative graphs. Data is no longer displayed with a black-and-white boxplot. Using the SAS® macro facility and the Statistical Graphics procedures, you can animate graphs to turn an outdated two-variable graph into a graph in motion that shows not only a relation between factors but also change over time. An even simpler approach for bubble graphs is to use a function in JMP to create colorful moving plots that would typically require many lines of code, with just a few clicks of the mouse.

Sentiment Analysis of Opinions About Self-Driving Cars
Swapneel Deshpande and Nachiket Kawitkar
Self-driving cars are no longer a futuristic dream. In the recent past, Google launched a prototype of a self-driving car, while Apple is also developing its own. Companies like Tesla have just introduced an Autopilot feature in their newer electric cars, which has created quite a buzz in the car market. This technology is said to enable aging or disabled people to get around without being dependent on anyone, while also possibly affecting the accident rate due to human error. But many people are still skeptical about the idea of self-driving cars, and that is our area of interest. In this project, we plan to do sentiment analysis on thoughts voiced by people on the Internet about self-driving cars. We obtained the data from CrowdFlower's Data for Everyone library, which contains reviews of self-driving cars. Our dataset contains 7,156 observations and 9 variables. We plan to do descriptive analysis of the reviews to identify key topics and then use supervised sentiment analysis. We also plan to track and report how the topics and sentiments change over time.

An Analysis of the Repetitiveness of Lyrics in Predicting a Song's Popularity
In the interest of understanding whether there is a correlation between the repetitiveness of a song's lyrics and its popularity, the top ten songs from the year-end Billboard Hot 100 chart from 2002 to 2015 were collected. The lyrics were then assessed to determine the counts of the top ten words used, and these word counts were used to predict the number of weeks each song stayed on the chart. The prediction model was analyzed to determine its quality and whether word count is a significant predictor of a song's popularity. To investigate whether song lyrics are becoming more simplistic over time, several tests were run to see whether average word counts have changed over the years. All analysis was completed in SAS® using various procedures.

Regression Analysis of the Levels of Chlorine in the Public Water Supply in Orange County, FL

This conference provides a range of events that can benefit any and all SAS users.
However, the extensive schedule can sometimes be overwhelming at first glance. With so many things to do and people to see, I have compiled the advice I was given as a novice WUSS attendee, along with lessons I have learned since. This presentation provides a catalog of tips for making the most of anyone's conference experience. From volunteering to the elementary advice of sitting at a table where you do not know anyone's name, listeners will be excited to take on all that WUSS offers.

Patients with Morbid Obesity and Congestive Heart Failure Have Longer Operative Time and Room Time in Total Hip Arthroplasty
More and more total hip arthroplasty patients are obese, and previous studies have shown a positive correlation between obesity and increased operative time in total hip arthroplasty; those studies, however, shared the limitation of small sample sizes. Decreasing operative time and room time is essential to meeting the increased demand for total hip arthroplasty, and the factors that influence these metrics should be quantified to allow for targeted reductions in time and adjusted reimbursement models. This study used a multivariate approach to identify which factors increase operative time and room time in total hip arthroplasty. The American College of Surgeons National Surgical Quality Improvement Program database was used to identify a cohort of over thirty thousand patients who had total hip arthroplasty between 2006 and 2012. Patient demographics, comorbidities including body mass index, and anesthesia type were used to create generalized linear models identifying independent predictors of increased operative time and room time. The results showed that morbid obesity (body mass index > 40) independently increased operative time by 13 minutes and room time by 18 minutes. Congestive heart failure led to the greatest increase in overall room time, resulting in a 20-minute increase. Anesthesia method further influenced room time, with general anesthesia resulting in an 18-minute increase compared with spinal or regional anesthesia. Obesity is the major driver of increased operative time in total hip arthroplasty; congestive heart failure, general anesthesia, and morbid obesity each lead to substantial increases in overall room time, with congestive heart failure leading to the greatest increase. All analyses were conducted in SAS (version 9.4, Cary, NC).

Using SAS: Monte Carlo Simulations of Manufactured Goods Should-Cost Models
Should-cost modeling, or "cleansheeting", of manufactured goods or services is a valuable tool for any procurement group: it provides category managers a foundation to negotiate, test, and drive value-added/value-engineering ideas. However, an entire negotiation can be derailed by a supplier arguing that certain assumptions or inputs do not reflect what they are currently seeing in their plant. The most straightforward resolution to this issue is a Monte Carlo simulation of the cleansheet, which enables the manager to head off such supplier tangents by showing how each input affects the model as a whole and the resulting costs. In this ePoster, we demonstrate a method for running a Monte Carlo simulation on manufactured goods, covering all of the direct costs associated with production (labor, machine, material) as well as the indirect costs (e.g., overhead). Using SAS, this simulation model encompasses 60 variables from nine discrete manufacturing processes and is set to automatically output the information most relevant to the category manager.
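A minimal sketch of the Monte Carlo idea above for a single line item, with made-up cost distributions (the model in the abstract spans 60 variables across nine processes):

    data mc;
      call streaminit(20160907);
      do rep = 1 to 10000;
        labor    = rand('normal', 4.00, 0.50);   /* $/unit, assumed */
        machine  = rand('normal', 2.50, 0.30);
        material = 5 + 2*rand('uniform');        /* $5-$7, assumed */
        overhead = 0.15 * (labor + machine);     /* simple indirect load */
        total    = labor + machine + material + overhead;
        output;
      end;
    run;

    /* The spread, not just the point estimate, frames the negotiation */
    proc means data=mc mean p5 p95;
      var total;
    run;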
Making Prompts Work for You: Using SAS Enterprise Guide Prompts with Categorization of Output
Edward Lan and Kai-Jen Cheng
In the statistical and epidemiology units of public health departments, SAS code is often reused across a variety of projects for data cleaning and for generating output datasets from the databases. Each SAS user copies and pastes common SAS code into their own program and uses it to generate datasets for analysis. To simplify this process, SAS Enterprise Guide (EG) prompts can be used to eliminate the need for the user to edit the SAS code or copy and paste: instead, the user enters the desired directory, date ranges, and variables to be included in the dataset. For large datasets, however, it is beneficial to group variables into categories instead of having the user individually choose the desired variables or lumping all the variables into the final dataset. Using the SAS EG prompt for static lists, where the SAS user selects multiple values, variable categories can be created for selection so that groups of variables are selected into the dataset. In this paper for novice and intermediate SAS users, we discuss how macros and SAS EG prompts, using EG 7.1, can be used to automate the process of generating an output dataset in which the user selects a folder directory, date ranges, and categories of variables to be included in the final dataset. Additionally, the paper explains how to overcome issues with integrating the categorization prompt with generating the output dataset.

Application of Data Mining Techniques for Determining Factors Associated with Overweight and Obesity Among California Adults
This paper describes the application of supervised data mining methods using SAS Enterprise Miner 12.3 on data from the 2013-2014 California Health Interview Survey (CHIS), in order to better understand obesity and the indicators that may predict it. CHIS, the largest health survey ever conducted in any state, samples California households through random-digit dialing (RDD). Enterprise Miner was used to apply logistic regression, decision tree, and neural network models to predict a binary variable, Overweight/Obese Status, which indicates whether an individual has a body mass index (BMI) greater than 25. These models were compared to assess which categories of information, such as demographic factors or insurance status, and which individual factors, like race, best predict whether an individual is overweight/obese.
The Orange Lifestyle
If you are like many SAS users, you have worked with the classical "old" SAS graphics procedures for some time and are very comfortable with the code syntax and workflow that make for reasonably simple creation of presentation graphics. Then all of a sudden a job requires the capabilities of the procedures in SAS ODS Graphics. At first glance you may think, "OK, a few more procedures and a little syntax to learn." Then you realize that moving into this arena is no small task. This presentation overviews the options and approaches you might take to get up to speed fast, including decision trees to follow in deciding upon a course of action. This paper contains many examples of very simple ways to get very simple things accomplished. Over 20 different graphs are developed, using only a few lines of code each, with data from the SASHELP data sets. The usage of the SGPLOT, SGPANEL, and SGSCATTER procedures is shown. In addition, the paper addresses those situations in which the user must instead use a combination of the TEMPLATE and SGRENDER procedures to accomplish the task at hand. Most importantly, the use of the ODS Graphics Designer as a teaching tool and a generator of sample graphs and code is covered; a single slide in the presentation overviewing the Designer shows everything needed to generate a very complex graph. The emphasis in this paper is the simplicity of the learning process: users will be able to take the included code and run it immediately on their personal machines to achieve an instant sense of gratification. The paper also addresses the "ODS sandwich" for creating output and the use of PROC DOCUMENT to manipulate it.

Exploring Multidimensional Data with Parallel Coordinate Plots
Throughout the many phases of an analysis, it may be more intuitive to review data statistics and modeling results as visual graphics rather than numerical tables. This is especially true when an objective of the analysis is to build a sense of the underlying structures within the data rather than to describe the data statistics or model results with numerical precision. Although scatterplots provide a means of evaluating relationships, their two-dimensional nature may be limiting when exploring data across multiple dimensions simultaneously. One tool for exploring multivariate data is the parallel coordinate plot. I will present a method of producing parallel coordinate plots using PROC SGPLOT and will provide examples of when they may be very informative, in particular their application to an analysis of longitudinal observational data and to results from unsupervised classification techniques.

Making SAS the Easy Way Out: Harnessing the Power of PROC TEMPLATE to Create Reproducible, Complex Graphs
With high-pressure deadlines and mercurial collaborators, creating graphs in the most familiar way seems like the best option, and using post-processing programs like Photoshop or Microsoft PowerPoint to modify graphs is quicker and easier for the novice SAS user, or for one's collaborators to do on their own. However, reproducibility is a huge issue in the scientific community. Any changes made outside statistical software need to be repeated when collaborator preferences change, the data change, the journal requires additional elements, or for a host of other reasons, and the likelihood of making errors increases along with the time spent making the figure. Learning PROC TEMPLATE allows one to seamlessly create complex, automatically generated figures and eliminates the need for post-processing. This paper demonstrates how to do complex graph-manipulation procedures in SAS 9.3 or later to solve common problems, including lattice panel plots for different variables, split plots and broken axes, weighted panel plots, using selected observations in each panel, waterfall plots, and graph annotation. The examples presented are healthcare-based, but the methods are applicable to finance, business, and education. Attendees should have a basic understanding of the macro language, graphing in SAS using SGPLOT, and ODS Graphics.
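In the spirit of the few-lines-per-graph claim above, a minimal ODS Graphics example against a SASHELP data set:

    /* A grouped scatter plot in three statements */
    proc sgplot data=sashelp.class;
      scatter x=height y=weight / group=sex;
    run;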
Customizing Plots to Your Heart's Content Using PROC GPLOT and the Annotate Facility
This paper introduces tips and techniques that can speed up the validation of two datasets. It begins with a brief introduction to PROC COMPARE, then introduces some techniques, without automation, that can help speed up the validation process; these techniques are most useful when one validates a pair of datasets for the first time. For the automation part, QCData is used to compare two datasets, and QCDir is used to compare datasets in the production directory against their corresponding datasets in the QC directory. Also introduced is &SYSINFO, a powerful and extremely useful macro variable that holds a value summarizing the result of a comparison.

Combining Reports into a Single File Deliverable
In the daily operations of a biostatistics and statistical programming department, we are often tasked with generating reports in the form of tables, listings, and figures (TLFs). A common setting in the pharmaceutical industry is to develop SAS® code in which individual programs generate one or more TLFs in standard formatted output, such as RTF or PDF, with a common look and feel. As trends move toward electronic review and distribution, there is an increasing demand for producing a single file as the final deliverable rather than sending each output individually. Various techniques have been presented over the years, but they typically require post-processing of individual RTF or PDF files, require a knowledge base beyond SAS, and may require additional software licenses. The use of item stores as an alternative has been presented more recently. Using item stores, SAS stores the data and instructions used for the creation of each report; individual item stores are restructured and replayed at a later time within an ODS sandwich to obtain a single-file deliverable. This single file is well structured, with either a hyperlinked table of contents in RTF or proper bookmarks in PDF, all defined in a meaningful way that enables the end user to navigate easily through the document. This hands-on workshop introduces the user to creating, replaying, and restructuring item stores to obtain a single file containing a set of tables, listings, and figures. ODS is used in this application with SAS 9.4 in a Windows environment.

Getting Your Hands on CONTRAST and ESTIMATE Statements
Many SAS users are familiar with modeling with and without random effects through PROC GLM, PROC MIXED, PROC GLIMMIX, and PROC GENMOD. The parameter estimates are great for giving overall effects, but analysts need the CONTRAST and ESTIMATE statements to dig deeper into the model and answer questions such as: "What is the predicted value of my outcome for a given combination of variables?", "What is the estimated difference between groups at a given time point?", or "What is the estimated difference between slopes for two of three groups?" This hands-on workshop provides a step-by-step introduction so that SAS users can get more comfortable programming ESTIMATE and CONTRAST statements and finding answers to these types of questions. The workshop focuses on statements that can be applied to either fixed-effects or mixed models.
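A minimal sketch of the ESTIMATE and CONTRAST statements in PROC GLM; sashelp.class stands in for study data, and with CLASS sex the levels are ordered F then M:

    /* Compare the sexes on weight, adjusting for height */
    proc glm data=sashelp.class;
      class sex;
      model weight = sex height;
      estimate 'F minus M, adjusted' sex 1 -1;   /* estimated group difference */
      contrast 'Any sex effect'      sex 1 -1;   /* F test of the same contrast */
    run;
    quit;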
Advanced Programming Techniques with PROC SQL
Kirk Paul Lafler
The SQL procedure contains a number of powerful and elegant language features for SQL users. This hands-on workshop (HOW) emphasizes highly valuable and widely usable advanced programming techniques that help users of Base SAS harness the power of the SQL procedure. Topics include using PROC SQL to identify FIRST.row, LAST.row, and BETWEEN.rows in BY-group processing; constructing and searching the contents of a value-list macro variable for a specific value; data validation operations using various integrity constraints; data summary operations that process down rows and across columns; and using the MSGLEVEL= system option and METHOD SQL option to capture vital processing details, including the algorithm selected and used by the optimizer when processing a query.

How to Analyze Correlated and Longitudinal Data
The United States Food and Drug Administration (FDA) requires an annotated case report form (aCRF) to be submitted as part of the electronic data submission for every clinical trial. The aCRF is a PDF document that maps the data captured in a clinical trial to the corresponding variable names in the Study Data Tabulation Model (SDTM) datasets. The SDTM Metadata Submission Guidelines recommend that the aCRF be bookmarked in a specific way. A one-to-one relationship between bookmarks and aCRF forms is not typical; one form may have two or more bookmarks, so the number of bookmarks can easily reach thousands in any study. Generating the bookmarks manually is a tedious, time-consuming job. This paper presents an approach to automating the entire bookmark-generation process using SAS® 9.2 and later releases, Ghostscript (a PDF editing tool), and the linkages between forms and their corresponding visits. This approach can potentially save a tremendous amount of time, and the eyesight of programmers, while reducing the potential for human error.

Did the Protocol Change Work? Interrupted Time Series Evaluation for Health Care Organizations
Carol Conell and Alexander Flint
Background: Analysts are increasingly asked to evaluate the impact of policy and protocol changes in healthcare, as well as in education and other industries. Often the request occurs after the change is implemented, and the objective is to provide an estimate of the effect as quickly as possible. This paper demonstrates how we used time series models to estimate the impact of a specific protocol change using data from the electronic health record (EHR). Although the approach is well established in econometrics, it remains much less common in healthcare; the paper is designed to make the technique available to intermediate-level SAS programmers. Methods: The paper introduces the time series framework, terminology, and advantages to users with no previous time series experience. It illustrates how SAS/ETS® can be used to fit an interrupted time series model to evaluate the impact of a one-time protocol change, based on a real-world example from Kaiser Permanente Northern California. Macros are provided for creating a time series database, fitting basic ARMA models using PROC ARIMA, and comparing models. Once the simple time series structure is identified for this example, heterogeneity in the effect of the intervention is examined using data from subsets of patients defined by the severity of their presentation, showing how the aggregated approach allows exploration of effect heterogeneity. Conclusions: Aggregating data and applying time series methods provides a simple way to evaluate the impact of protocol changes and similar interventions. When the timing of these interventions is well defined, this approach avoids the need to collect substantial data on individual-level confounders and the problems associated with selection bias. If the effect is immediate, the approach requires only a very moderate number of time points.
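A skeletal sketch of an interrupted time series fit with PROC ARIMA in SAS/ETS; WORK.MONTHLY and its variables are hypothetical, with POST a 0/1 step variable marking the protocol change:

    proc arima data=work.monthly;
      /* Relate the outcome series to the intervention indicator */
      identify var=rate crosscorr=(post);
      /* AR(1) noise plus a step effect for the protocol change */
      estimate p=1 input=(post) method=ml;
    run;
    quit;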
Finding Strategies for Credit Union Growth without Mergers or Acquisitions
In this era of mergers and acquisitions, community banks and credit unions often believe that bigger is better, that they cannot survive if they stay small. Using 20 years of industry data, we disprove that notion for credit unions, showing that even small ones can grow slowly but strongly on their own, without merging with larger ones. We first show how we find this strategy in the data. Then we segment credit unions by size and see how the strategy changes within each segment. Finally, we track the progress of these segments over time and develop a predictive model for any credit union. In the process, we introduce the concept of "High-Performance Credit Unions", which take actions that are proven to lead to credit union growth. Code snippets will be shown for any version of SAS® but will require the SAS/STAT® package.

A Case of Retreatment: Handling Retreated Patient Data
Sriramu Kundoor and Sumida Urval
In certain clinical trials, if the study protocol allows, there are scenarios where subjects are re-enrolled into the study for retreatment. Per CDISC guidelines, these subjects need to be handled differently from non-retreated subjects. The CDISC SDTM Implementation Guide versions 3.1.2 (page 29) and 3.2 (Section 4, page 8) state: "The unique subject identifier (USUBJID) is required in all datasets containing subject-level data. USUBJID values must be unique for each trial participant (subject) across all trials in the submission. This means that no two (or more) subjects, across all trials in the submission, may have the same USUBJID. Additionally, the same person who participates in multiple clinical trials (when this is known) must be assigned the same USUBJID value in all trials." Therefore a retreated subject cannot have two USUBJIDs, in spite of being the same person undergoing the trial phase more than once. This paper describes (with suitable examples) a method of handling retreated-subject data in the SDTM datasets per CDISC standards, and at the same time capturing it in such a way that it is easy for the programmer or statistician to analyze the data in ADaM datasets. The paper also discusses the conditions that need to be followed (and the logic behind them) when programming retreated-patient data into the different SDTM domains.

Why and What Standards for Oncology Studies (Solid Tumor, Lymphoma and Leukemia)
Each therapeutic area has its own unique data collection and analysis, and oncology in particular has very specific standards for the collection and analysis of data. Oncology studies are separated into one of three subtypes according to response criteria guidelines. The first subtype, solid tumor studies, usually follows RECIST (Response Evaluation Criteria in Solid Tumors). The second, lymphoma studies, usually follows Cheson. Lastly, leukemia studies follow study-specific guidelines (IWCLL for chronic lymphocytic leukemia, IWAML for acute myeloid leukemia, NCCN guidelines for acute lymphoblastic leukemia, and ESMO clinical practice guidelines for chronic myeloid leukemia).
This paper demonstrates the notable level of sophistication implemented in CDISC standards, driven mainly by the differentiation across response criteria. It shows specifically which SDTM domains are used to collect the different data points in each subtype. For example, solid tumor studies collect tumor results in TR and TU and response in RS. Lymphoma studies collect not only tumor results and response, but also bone marrow assessments in LB and FA, and spleen and liver enlargement in PE. Leukemia studies collect blood counts (i.e., lymphocytes, neutrophils, hemoglobin, and platelet count) in LB and genetic mutations, in addition to what is collected in lymphoma studies. The paper also introduces oncology terminology (e.g., CR, PR, SD, PD, NE) and the oncology-specific ADaM Time to Event (--TTE) data set. Finally, the paper shows how standards (e.g., response criteria guidelines and CDISC) streamline the development of clinical trial artefacts in oncology studies, and how end-to-end artefact development can be accomplished through this standards-driven process.

Efficacy Endpoint Analysis Dataset Generation with a Two-Layer ADaM Design Model
In clinical trial data processing, the design and implementation of efficacy-endpoint datasets are often the most challenging processes to standardize. This paper introduces a two-layer ADaM design method for generating an efficacy-endpoint dataset and summarizes practices from past projects. The two-layer ADaM design method improves not only implementation and review, but validation as well. The method is illustrated with examples.

Strategic Considerations for CDISC Implementation
Amber Randall and Bill Coar
The Prescription Drug User Fee Act (PDUFA) V guidance mandates the eCTD format for all regulatory submissions by May 2017. The implementation of CDISC data standards is not a one-size-fits-all process and can present both a substantial technical challenge and a potentially high cost to study teams. Many factors should be considered in strategizing when and how, including timeline, study team expertise, and final goals. Different approaches may be more efficient for brand-new studies than for existing or completed studies. Should CDISC standards be implemented right from the beginning, or does it make sense to convert data once it is known that the study product will indeed be submitted for approval? Does a study team already have the technical expertise to implement data standards? If not, is it more cost-effective to invest in training in-house or to hire contractors? How does a company identify reliable and knowledgeable contractors? Are contractors skilled in SAS programming sufficient, or will they also need in-depth CDISC expertise? How can the work of contractors be validated? Our experience as a statistical CRO has allowed us to observe and participate in many approaches to this challenging process. What has become clear is that a good, informed strategy planned from the beginning can greatly increase efficiency and cost effectiveness and reduce stress and unanticipated surprises.

SDD Project Management Tool, Real-Time and Hassle-Free: A One-Stop Shop for Study Validation and Completion-Rate Estimation
Do you sometimes feel like an octopus when working on multiple projects as a lead programmer, and find it hard to monitor what is going on? Perhaps you know Murphy's law: anything that can go wrong will go wrong, and you will want to be the first to know before anybody else.
What is its impact, and what is the downstream process? After uploading the study submission package to SDD, we developed a working process that collects status information for each program and output. A SAS program then reads the status report of repository documents and updates the tracker with timestamps (last modified, last run) of:
- the source and validation programs
- upstream documents (inputs to the program, such as raw data or macros)
- downstream documents
Features include:
- Pinnacle 21 traffic lighting
- pulling time variables from SDD and building the logic (raw < SDTM < ADaM, source < validation)
- log scans in batch (with estimated time to completion)
- metadata-level checking
- a workflow tying all of the above together
- a scheduled job running the above tasks in sequence
- a study completion report (and algorithm)

Building Better ADaM Datasets Faster with If-Less Programming
One of the major tasks in building ADaM datasets is writing the SAS code that implements the ADaM variables according to an ADaM specification. SAS programmers often find this task tedious, time-consuming, and even error-prone. The main reason it seems daunting is that a large number of variables must be created with IF-THEN-ELSE statements in one or more DATA steps at the same time for each ADaM dataset. To address this common issue and ease the process, this paper introduces a small set of DATA step inline macros that allow programmers to derive most ADaM variables without IF-THEN-ELSE statements. With this if-less programming approach, a programmer can not only make a piece of ADaM implementation code easy to read and understand, but also make it easy to modify along with the evolving ADaM specification, and straightforward to reuse in the development of other ADaM datasets or studies. What's more, the approach can be applied to the derivation of ADaM datasets from both SDTM and non-SDTM datasets. (One common if-less idiom is sketched at the end of this section.)

What's Hot: Skills for SAS® Professionals
Kirk Paul Lafler
As a new generation of SAS® users emerges, current and prior generations have an extensive array of procedures, programming tools, approaches, and techniques to choose from. This presentation identifies and explores the areas that are hot in the world of the professional SAS user. Topics include Enterprise Guide, PROC SQL, PROC REPORT, the Output Delivery System (ODS), the macro language, DATA step programming techniques such as arrays and hash objects, SAS University Edition software, technical support at support.sas.com, wiki content on sasCommunity.org®, published "white" papers on LexJansen.com, and other venues.

Creating Dynamic Documents with SAS® in the Jupyter Notebook to Reinforce Soft Skills
Experience with technology and strong computing skills continue to be among the qualifications most desired by employers. Programs in statistics and other especially quantitative fields have bolstered the programming and software training they impart to graduates. But as these skills become more common, there remains an equally important desire for what are often called "soft skills": communication, telling a story, extracting meaning from data. Through the use of SAS® in the Jupyter Notebook, traditional programming assignments are easily transformed into exercises involving both analytics in SAS and the writing of a clear report. Traditional reports become dynamic documents that include both text and living SAS® code that is run during document creation. Students should never just be writing SAS® code again.
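The if-less paper's inline macros are its own; as a stand-in, here is one widely used if-less idiom, deriving a treatment variable through a format lookup rather than an IF-THEN-ELSE ladder (a common substitute technique, not the authors' macros; the toy DM data is made up):

    data work.dm;                  /* toy stand-in for a DM domain */
      input usubjid $ armcd $;
      datalines;
    001 A
    002 B
    003 C
    ;
    run;

    /* Encode the mapping once as a format... */
    proc format;
      value $trt
        'A' = 'Placebo'
        'B' = 'Low Dose'
        'C' = 'High Dose'
        other = 'Unknown';
    run;

    /* ...then derive the variable with a lookup instead of IF-THEN-ELSE */
    data work.adsl;
      set work.dm;
      length trt01p $20;
      trt01p = put(armcd, $trt.);
    run;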
Building Better ADaM Datasets Faster With If-Less Programming

One of the major tasks in building ADaM datasets is writing the SAS code that implements the ADaM variables according to an ADaM specification. SAS programmers often find this task tedious, time-consuming and error-prone. The main reason the task seems daunting is that a large number of variables must be created with if-then-else statements, in one or more data steps, for each ADaM dataset. To address this common issue and streamline the process, this paper introduces a small set of data step inline macros that allow programmers to derive most ADaM variables without if-then-else statements. With this if-less programming approach, a programmer can not only make ADaM implementation code easy to read and understand, but also make it easy to modify as the ADaM specification evolves, and straightforward to reuse in the development of other ADaM datasets or studies. What's more, this approach can be applied to the derivation of ADaM datasets from both SDTM and non-SDTM datasets.
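The paper's inline macros are not reproduced here, but one familiar flavor of if-less derivation replaces an if-then-else ladder with a format-based table lookup. The sketch below illustrates that general idea only, under assumed names (the $RESPDEC format and the ADRS/ADRS2 data sets are hypothetical).

/* Sketch of "if-less" derivation using a format as a lookup table    */
/* instead of an if-then-else ladder.                                 */
proc format;
   value $respdec             /* decode response codes in one place */
      'CR'  = 'Complete Response'
      'PR'  = 'Partial Response'
      'SD'  = 'Stable Disease'
      'PD'  = 'Progressive Disease'
      'NE'  = 'Not Evaluable'
      other = 'Unknown';
run;

data adrs2;
   set adrs;
   /* one PUT call replaces five if-then-else branches; changing the  */
   /* spec now means editing the format, not the data step            */
   avalc_decode = put(avalc, $respdec.);
run;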
What's Hot – Skills for SAS® Professionals
Kirk Paul Lafler

As a new generation of SAS® users emerges, current and prior generations of users have an extensive array of procedures, programming tools, approaches and techniques to choose from. This presentation identifies and explores the areas that are hot in the world of the professional SAS user. Topics include Enterprise Guide, PROC SQL, PROC REPORT, the Output Delivery System (ODS), the macro language, DATA step programming techniques such as arrays and hash objects, SAS University Edition software, technical support at support.sas.com, wiki content on sasCommunity.org®, "white" papers published on LexJansen, and other venues.

Creating Dynamic Documents with SAS® in the Jupyter Notebook to Reinforce Soft Skills

Experience with technology and strong computing skills continue to be among the qualifications most desired by employers. Programs in statistics and other especially quantitative fields have bolstered the programming and software training they impart to graduates. But as these skills become more common, there remains an equally important desire for what are often called "soft skills": communicating, telling a story, extracting meaning from data. Through the use of SAS® in the Jupyter Notebook, traditional programming assignments are easily transformed into exercises involving both analytics in SAS and the writing of a clear report. Traditional reports become dynamic documents that include both text and living SAS® code that is run during document creation. Students need never again write just SAS® code.

Contributing to SAS® By Writing Your Very Own Package

One of the biggest reasons for the explosive growth of the R statistical software in recent years is its massive collection of user-developed packages. Each package consists of a number of functions centered on a particular theme or task not previously (or not well) addressed within the software. While SAS® continues to advance on its own, SAS® users can now contribute packages to the broader SAS® community. Creating and contributing a package is simple and straightforward, and it empowers SAS® users to grow the software themselves. There is a lot of potential to increase the general applicability of SAS® to tasks beyond statistics and data management, and it's up to you!

Collaborations in SAS Programming, or Playing Nicely with Others
Kristi Metzger and Melissa R. Pfeiffer

SAS programmers rarely work in isolation; rather, they are usually part of a team that includes other SAS programmers such as data managers and data analysts, as well as non-programmers like project coordinators. Some members of the team -- including the SAS programmers -- may work in different locations. Given these complex collaborations, it is increasingly important to adopt approaches for working effectively and easily in teams. In this presentation, we discuss strategies and methods for working with colleagues in varied roles. We first address file organization -- putting things in places easily found by team members -- including the importance of numbering programs that are executed sequentially. While documentation is an often-neglected activity, we next review the importance of documenting both within SAS and in other forms for the non-SAS users on your team. We also discuss strategies for sharing formats and writing friendly SAS code for seamless work with other SAS programmers. Additionally, data sets are often in flux, and we talk about approaches that add clarity to data sets and their production. Finally, we suggest tips for double-checking another programmer's code and/or output, including the importance of confirming the logic behind variable construction and the use of PROC COMPARE in the confirmation process (see the sketch below). Ultimately, adopting strategies that ease working jointly helps when you have to review work you did in the past, and makes for a better playground experience with your teammates.
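As a minimal sketch of the PROC COMPARE confirmation step mentioned above: the library and dataset names (PROD.ADSL, QC.ADSL_QC) are placeholders, assuming the production and independently programmed QC datasets share USUBJID as a key.

/* Double-programming check: compare the production dataset against   */
/* an independently programmed QC version.                            */
proc sort data=prod.adsl  out=prod_adsl; by usubjid; run;
proc sort data=qc.adsl_qc out=qc_adsl;   by usubjid; run;

proc compare base=prod_adsl compare=qc_adsl
             listall            /* list all mismatching variables/obs */
             criterion=1e-8;    /* tolerance for numeric rounding     */
   id usubjid;
run;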

A Brief Introduction to WordPress for SAS Programmers

WordPress is a free, open-source platform, based on PHP and MySQL, used to build websites. It is easy to use, with a point-and-click user interface. You can write custom HTML and CSS if you want, but you can also build beautiful webpages without knowing anything at all about HTML or CSS. Features include a plugin architecture and a template system. WordPress was used by more than 26.4% of the top 10 million websites as of April 2016. In fact, SAS® blogs (hosted at blogs.sas.com) use the WordPress platform. If you are considering starting a blog to share your love of SAS or to raise the profile of your business, and are considering using WordPress, join us for a brief introduction to WordPress for SAS programmers.

How to Be a Successful and Healthy Home-Based SAS Programmer in the Pharma/Biotech Industry
Daniel Tsui, Parexel International Inc.
Quick Tip Talk (10 min.), WUSS 2016 Educational Forum and Conference, September 7-9, 2016, Grand Hyatt San Francisco on Union Square, San Francisco, California

With the advancement of technology, the tech industry accepts more and more flexible schedules and telecommuting opportunities. In recent years, more statistical SAS programming jobs in the pharma/biotech industry have shifted from office-based to home-based. There has been ongoing debate about how beneficial the shift is, and much room remains for discussing the pros and cons of the home-based model. This presentation investigates those pros and cons for home-based SAS programmers within the pharma/biotech industry. The overall benefits were laid out in a Microsoft whitepaper based on a survey, Work Without Walls, which listed the top 10 benefits of working from home from the employee's viewpoint, such as work/home balance, avoiding traffic, greater productivity, and fewer distractions. However, to be a successful home-based SAS programmer in the pharma/biotech industry, some enemies have to be defeated, such as being on call 24 hours, performance issues, solitude, advancement opportunities, and dealing with family. This presentation will discuss the key highlights.

Lora Delwiche and Susan Slaughter

SAS Studio is an important new interface for SAS, designed for both traditional SAS programmers and point-and-click users. For SAS programmers, SAS Studio offers many useful features not found in the traditional Display Manager. SAS Studio runs in a web browser: you write programs in SAS Studio, submit them to a SAS server, and the results are returned to your SAS Studio session. SAS Studio is included in the license for Base SAS, is the interface for SAS University Edition, and is the default interface for SAS OnDemand for Academics. Both SAS University Edition and SAS OnDemand for Academics are free of charge for non-commercial use. With SAS Studio becoming so widely available, this is a good time to learn about it.

An Animated Guide: An Introduction to SAS Macro Quoting

This cartoon-like presentation expands on material from a previous paper (which explained how SAS processes macros) to show how SAS processes macro quoting. The "map of the SAS Supervisor" in this cartoon is suggested as a very useful paradigm for understanding SAS macro quoting. Boxes on the map are either subroutines or storage areas, and the cartoon lets you see "quoted" tokens flow through the components of the SAS Supervisor as code executes. The basic concepts of the paper are: 1) the map of the SAS Supervisor; 2) the idea that certain parts of the map monitor tokens as they pass through; 3) the idea of SAS tokens as rule triggers for actions taken by parts of the map; 4) the fact that macro masking prevents recognition of tokens and the triggering of rules; and 5) the places in the SAS system where unquoting happens.
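To make the masking idea concrete, here is a small generic example of the behavior the presentation maps out (not drawn from the paper itself): %STR masks tokens at macro compile time, %BQUOTE at macro execution time.

%let list = %str(x, y, z);   /* %STR masks the commas at macro compile time */
%put &=list;

%macro check(val);
   /* %BQUOTE masks special tokens in a resolved value at execution   */
   /* time, so the OR inside &val is not parsed as a logical operator */
   /* by %IF; without it, this %IF would fail                         */
   %if %bquote(&val) = %str(a or b) %then %put Matched: &val;
%mend check;
%check(a or b)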