We are IEEE Boston

Supporting electrical and electronics engineers throughout all career phases with professional development, education and resources.

Recent News, Announcements, and Upcoming Events

MAR 2025

Catch up on all the news, announcements and events in the monthly online newsletter The Reflector

Mar
22
Sat
Introduction to Neural Networks and Deep Learning (Part 1) @ On-Line
Mar 22 @ 8:30 am – 12:30 pm

Registration Fees:

Member Early Rate (by March 7): $115.00

Member Rate (after March 7): $130.00

Non-Member Early Rate (by March 7): $135.00

Non-Member Rate (after March 7): $150.00


Decision to run or cancel the course: Friday, March 14, 2025

Speaker:   CL Kim works in Software Engineering at CarGurus, Inc.

Course Format: Live Webinar, 4.0 hours of instruction

Series Overview: From the book introduction: “Neural networks and deep learning currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing.”

This Part 1 course and the planned Part 2 course (to be confirmed) will together teach many of the core concepts behind neural networks and deep learning.


More from the book introduction (reference: “Neural Networks and Deep Learning” by Michael Nielsen, http://neuralnetworksanddeeplearning.com/): “We’ll learn the core principles behind neural networks and deep learning by attacking a concrete problem: the problem of teaching a computer to recognize handwritten digits. …it can be solved pretty well using a simple neural network, with just a few tens of lines of code, and no special libraries.”

“But you don’t need to be a professional programmer.”

The code provided is in Python; even if you don’t program in Python, it should be easy to understand with just a little effort.

Benefits of attending the series:

* Learn the core principles behind neural networks and deep learning.
* See a simple Python program that solves a concrete problem: teaching a computer to recognize a handwritten digit.
* Improve the result through incorporating more and more core ideas about neural networks and deep learning.
* Understand the theory, with worked-out proofs of fundamental equations.

The demo Python program (updated from the version provided in the book) can be downloaded from the speaker’s GitHub account. The demo program runs in a Docker container on your Mac, Windows, or Linux personal computer; we plan to provide instructions for doing that in advance of the class.

(That is one good reason to register early if you plan to attend: you will receive the straightforward instructions with plenty of time to set up Git and Docker, tools widely used among software professionals.)

Course Background and Content:   This is a live instructor-led introductory course on Neural Networks and Deep Learning. It is planned to be a two-part series of courses. The first course is complete by itself and covers a feedforward neural network (but not convolutional neural network in Part 1). It will be a pre-requisite for the planned Part 2 second course. The class material is mostly from the highly-regarded and free online book “Neural Networks and Deep Learning” by Michael Nielsen, plus additional material such as some proofs of fundamental equations not provided in the book.

Outline:

  • Feedforward Neural Networks
  • Simple (Python) Network to classify a handwritten digit
  • Learning with Stochastic Gradient Descent
  • How the backpropagation algorithm works
  • Improving the way neural networks learn:
    • Cross-entropy cost function
    • SoftMax activation function and log-likelihood cost function
    • Rectified Linear Unit
  • Overfitting and Regularization:
    • L2 regularization
    • Dropout
    • Artificially expanding the data set
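To give a taste of the outline above, here is a minimal feedforward network trained with gradient descent. This is an illustrative toy (a 2-3-1 sigmoid network fit to XOR data with a quadratic cost), not the class demo code, which comes from Nielsen's book:

```python
# Minimal feedforward network (2-3-1, sigmoid activations) trained by
# full-batch gradient descent on XOR data. Illustrative toy only --
# the class demo is the network from Nielsen's book, not this code.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 3)); b1 = np.zeros(3)   # input -> hidden
W2 = rng.normal(size=(3, 1)); b2 = np.zeros(1)   # hidden -> output

def forward(X):
    a1 = sigmoid(X @ W1 + b1)    # hidden activations
    a2 = sigmoid(a1 @ W2 + b2)   # output activation
    return a1, a2

def quadratic_cost(a2):
    return float(np.mean((a2 - y) ** 2))

initial_loss = quadratic_cost(forward(X)[1])

eta = 1.0  # learning rate
for _ in range(5000):
    a1, a2 = forward(X)
    # Backpropagate the quadratic cost through the sigmoid layers.
    delta2 = (a2 - y) * a2 * (1 - a2)
    delta1 = (delta2 @ W2.T) * a1 * (1 - a1)
    W2 -= eta * a1.T @ delta2 / len(X); b2 -= eta * delta2.mean(axis=0)
    W1 -= eta * X.T @ delta1 / len(X); b1 -= eta * delta1.mean(axis=0)

final_loss = quadratic_cost(forward(X)[1])
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

The cost decreases as the weights and biases are updated, which is the whole of the training story; the class fills in where the gradient formulas come from.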

Pre-requisites: There is some heavier mathematics in learning the four fundamental equations behind backpropagation, so a basic familiarity with multivariable calculus and matrix algebra is expected, but nothing advanced is required. (The backpropagation equations can also simply be accepted without working through the proofs, since the provided Python code for the simple network just makes use of the equations.) Basic familiarity with Python or a similar language is also expected.
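For reference, the four fundamental equations behind backpropagation mentioned above, in the notation of Nielsen's book, are:

```latex
\begin{align}
\delta^L &= \nabla_a C \odot \sigma'(z^L) && \text{(BP1: error in the output layer)} \\
\delta^l &= \bigl((w^{l+1})^T \delta^{l+1}\bigr) \odot \sigma'(z^l) && \text{(BP2: error in terms of the next layer's error)} \\
\frac{\partial C}{\partial b^l_j} &= \delta^l_j && \text{(BP3: gradient of the cost w.r.t.\ biases)} \\
\frac{\partial C}{\partial w^l_{jk}} &= a^{l-1}_k \, \delta^l_j && \text{(BP4: gradient of the cost w.r.t.\ weights)}
\end{align}
```

Here $\delta^l$ is the error vector at layer $l$, $z^l$ and $a^l$ are the weighted inputs and activations at that layer, $\odot$ is the elementwise (Hadamard) product, and $C$ is the cost function.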

CL Kim works in Software Engineering at CarGurus, Inc. He has graduate degrees in Business Administration and in Computer and Information Science from the University of Pennsylvania. He had previously taught for a few years the well-rated IEEE Boston Section class on introduction to the Android Platform and API.

Slides and the Zoom link will be emailed to paid attendees by Thursday, March 20, 2025.

Mar
26
Wed
MTT-S DML Lecture: “Power Without Pain: High-Power MMIC PA Design, the Pitfalls and How to Avoid Them” @ Virtual
Mar 26 @ 12:00 pm – 1:00 pm

IEEE Boston MTT/AP-S Chapter

Speaker: Dr. Michael Roberg, Qorvo

Register at:

This presentation discusses high-power monolithic microwave integrated circuit (MMIC) power amplifier (PA) design in Gallium Arsenide (GaAs) and Gallium Nitride (GaN).  At a high level, GaN and GaAs semiconductor technologies are compared on power amplifier design metrics to determine the relative advantages and disadvantages of each.  This is followed by an introduction to the most prevalent MMIC design topologies for the bulk of microwave applications: reactively matched, non-uniform distributed, balanced, push-pull, Doherty, and serially combined.  The presentation then focuses on the potential pitfalls the MMIC designer can encounter, with detailed discussion of how to avoid them toward the goal of first-pass design success.  It draws on the author’s more than 20 years of experience in the defense and commercial industries as well as academia.  MMIC designers will appreciate the candid explanation of the design topologies and pitfalls, while non-designers will come away with a good working knowledge of what can be achieved and what to watch out for.

Speaker Biography: Michael Roberg received the Ph.D. degree from the University of Colorado at Boulder in 2012. From 2003 to 2009, he was an engineer at Lockheed Martin-MS2 in Moorestown, NJ working on advanced phased array radar systems. From 2012 to 2022 he worked for Qorvo in the High Performance Analog business unit as a MMIC Design Engineering Fellow. In 2021, he received the Outstanding Young Engineer award from MTT-S and in 2022 he won the industry paper competition at IMS in Denver. From 2022-2024 he was an Engineering Fellow at mmTron, Inc. where he focused on MMIC development for millimeter wave systems.  Michael re-joined Qorvo as a member of the research organization in 2024 and continues to focus on advanced MMIC development.

Mar
27
Thu
Digital Signal Processing (DSP) for Software Radio @ Zoom
Mar 27 @ 6:00 pm – 6:30 pm

COURSE DESCRIPTION

Digital Signal Processing (DSP) for Software Radio

This course is co-sponsored with The IEEE Northern Virginia Section and the IEEE Baltimore Section

Course Kick-off / Orientation 6:00PM – 6:30PM EDT; Thursday, February 20, 2025

First Video Release, Thursday, February 20, 2025.   Additional videos released weekly in advance of that week’s live session!

Live Workshops:  6:00PM – 7:30PM EDT; Thursdays, February 27, March 6, 13, 20, 27

Registration is open through the last live workshop date.  Live workshops are recorded for later use.

Course Information will be distributed on Thursday, February 20, 2025 in advance of and in preparation for the first live workshop session.  A live orientation session will be held on Thursday, February 20, 2025.

Attendees will have access to the recorded session and exercises for two months (until May 27, 2025) after the last live session ends!

IEEE Member Early Rate (by February 17):  $190.00

IEEE Member Rate (after February 17):  $285.00

IEEE Non-Member Early Rate (by February 17):  $210.00

IEEE Non-Member Rate (after February 17):  $315.00

Decision to run/cancel course:   Thursday, February 13, 2025

Speaker:  Dan Boschen

This is a hands-on course combining pre-recorded lectures with live Q&A and workshop sessions in the popular and powerful open-source Python programming language.

Pre-Recorded Videos:  The course format includes pre-recorded video lectures that students can watch on their own schedule, and an unlimited number of times, prior to live Q&A workshop sessions on Zoom with the instructor. The videos will also be available to the students for viewing for up to two months after the conclusion of the course.

Course Summary

This course builds on the IEEE course “DSP for Wireless Communications,” also taught by Dan Boschen, further detailing the digital signal processing most applicable to practical real-world problems and applications in radio communication systems. Students need not have taken the prior course if they are familiar with fundamental DSP concepts such as the Laplace and Z-transforms and basic digital filter design principles.

This course brings together core DSP concepts to address signal processing challenges encountered in radios and modems for modern wireless communications. Specific areas covered include carrier and timing recovery, equalization, automatic gain control, and considerations to mitigate the effects of RF and channel distortions such as multipath, phase noise and amplitude/phase offsets.

Dan builds an intuitive understanding of the underlying mathematics through the use of graphics, visual demonstrations, and real-world applications for mixed signal (analog/digital) modern transceivers. This course is applicable to DSP algorithm development with a focus on meeting practical hardware development challenges, rather than a tutorial on implementations with DSP processors.
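To give a flavor of the algorithms covered (automatic gain control is one of the Class 3 topics), here is a minimal sketch of a feedback AGC loop. This is an illustrative toy, not course material; real AGC loops typically add an envelope detector and a log-domain loop filter:

```python
# Minimal feedback AGC sketch: scale the input so the output envelope
# converges to a target level. Illustrative toy only -- not course
# material.
import cmath

def agc(samples, target=1.0, alpha=0.01):
    """First-order AGC: the gain adapts until |output| reaches target."""
    gain = 1.0
    out = []
    for x in samples:
        y = gain * x
        out.append(y)
        # Amplitude error feeds back multiplicatively into the gain.
        gain += alpha * (target - abs(y)) * gain
    return out

# Constant-envelope complex tone at amplitude 4.0.
inp = [4.0 * cmath.exp(2j * cmath.pi * 0.01 * i) for i in range(2000)]
out = agc(inp)
print(round(abs(out[0]), 3), round(abs(out[-1]), 3))  # prints 4.0 1.0
```

The loop pulls an input of amplitude 4.0 down to the unit target; the loop gain `alpha` trades convergence speed against gain ripple, the kind of trade-off examined in the workshops.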

Now with Jupyter Notebooks!

This long-running IEEE course has been updated to include Jupyter Notebooks, which incorporate graphics together with Python simulation code to provide a “take-it-with-you” interactive user experience. No knowledge of Python is required, but the notebooks provide a basic framework for further signal processing development with those tools for those interested in doing so.

This course will not be teaching Python, but using it for demonstration. A more detailed course on Python itself is covered in a separate IEEE Course routinely taught by Dan titled “Python Applications for Digital Design and Signal Processing”.

All set-up information for installation of all tools used will be provided prior to the start of class.

Target Audience:

All engineers involved in or interested in signal processing for wireless communications. Students should have either taken the earlier course “DSP for Wireless Communications” or have been sufficiently exposed to basic signal processing concepts such as Fourier, Laplace, and Z-transforms, Digital filter (FIR/IIR) structures, and representation of complex digital and analog signals in the time and frequency domains. Please contact Dan at boschen@loglin.com if you are uncertain about your background or if you would like more information on the course.

Benefits of Attending/ Goals of Course:

Attendees will gain a strong intuitive understanding of the practical and common signal processing implementations found in modern radio and modem architectures and be able to apply these concepts directly to communications system design.

Pre-recorded lectures (3 hours each) will be distributed Friday prior to each week’s workshop dates.  Workshop / Q&A sessions are 6:00PM – 7:30PM on the dates listed below.

Kick-off / Orientation:  Thursday, February 20, 2025

Topics / Schedule:

Class 1: Thursday, February 27:  DSP Review, Radio Architectures, Digital Mapping, Pulse Shaping, Eye Diagrams

Class 2:  Thursday, March 6:  ADC Receiver, CORDIC Rotator, Digital Down Converters, Numerically Controlled Oscillators

Class 3: Thursday, March 13:  Digital Control Loops; Output Power Control, Automatic Gain Control

Class 4: Thursday, March 20:  Digital Control Loops; Carrier and Timing Recovery, Sigma Delta Converters

Class 5: Thursday, March 27:  RF Signal Impairments, Equalization and Compensation, Linear Feedback Shift Registers

Speaker’s Bio:

Dan Boschen has an MS in Communications and Signal Processing from Northeastern University, with over 25 years of experience in system and hardware design for radio transceivers and modems. He has held various positions at Signal Technologies, MITRE, Airvana and Hittite Microwave designing and developing transceiver hardware from baseband to antenna for wireless communications systems and has taught courses on DSP to international audiences for over 15 years. Dan is a contributor to Signal Processing Stack Exchange https://dsp.stackexchange.com/, and is currently at Microchip (formerly Microsemi and Symmetricom) leading design efforts for advanced frequency and time solutions.

For more background information, please view Dan’s LinkedIn page at: http://www.linkedin.com/in/danboschen

Mar
28
Fri
“High-Voltage Isolation Technology: From Process Development to Circuit Design” @ Tufts University - Joyce Cummings Center - Room 270
Mar 28 @ 3:00 pm – 4:30 pm

On behalf of the IEEE Microsystems Boston Organizing Committee, it’s my pleasure to invite you to our March Tech Talk event. Please find the invitation/flyer attached.


The details of the event are as follows:

Speaker: Dr. Ruida Yun, Senior Manager at Analog Devices

Agenda: 

3 – 3:15 pm: Networking and snacks

3:15 – 4:15 pm: Tech Talk and Q&A

RSVP: Please register at your earliest convenience using this link. This will be an in-person meeting.

Galvanic isolation is a popular way of breaking ground loops in noisy, high-voltage environments across applications where ground currents can disrupt data transmission, damage equipment, and even injure human operators. This talk introduces high-voltage isolation technology, in which process and circuit are developed together to deliver best-in-class isolation performance. Various design trade-offs will be examined, and a compact design will be presented that achieves the best performance with the highest channel density in the industry.

Speaker Bio:

Dr. Ruida Yun received the B.S. degree from Zhejiang University, China, in 2003, the M.S. degree from the Royal Institute of Technology (KTH), Sweden, in 2006 and the Ph.D. degree from Tufts University, Medford, MA, in 2011, all in electrical engineering. He is currently with Analog Devices Inc., Wilmington, MA, as a senior manager leading the advanced technology team working on high voltage isolation related process and product development. He has authored or coauthored 10 papers and holds 8 patents. His current research interests include high voltage digital isolator, high frequency isolated power converters and high speed circuit design.

Thanks and we look forward to meeting you soon!

Tyler

Director of Publicity, IEEE Microsystems Boston Chapter

Listen to our chapter’s podcast, The Microzone, on Spotify

Apr
19
Sat
Introduction to Neural Networks and Deep Learning (Part 2 — Convolutional Neural Networks, Basic Language Modeling) @ Zoom
Apr 19 @ 8:30 am – 12:30 pm

Registration Fees:

Member Early Rate (by April 4): $115.00

Member Rate (after April 4): $130.00

Non-Member Early Rate (by April 4): $135.00

Non-Member Rate (after April 4): $150.00

Decision to run or cancel the course is:  Friday, April 11, 2025

Series Overview: Neural networks and deep learning currently provide the best solutions to many problems in image recognition, speech recognition, natural language processing, and generative AI.

The Part 1 class and this Part 2 class will teach many of the core concepts behind neural networks and deep learning, and basic language modeling.

The planned Part 3 class (to be confirmed) will teach a simple Generative Pre-trained Transformer (GPT), based on the seminal Attention is All You Need paper and OpenAI’s GPT-2/GPT-3.

In the first section of this Part 2 class, we again use a neural network to teach a computer to recognize handwritten digits. Here we introduce convolutional neural networks, which are predominantly used in computer vision applications such as recognizing objects in images.

The second section of the Part 2 class introduces basic language modeling, and simple generation of text based on prior learned text, in this case, baby names.

But you don’t need to be a professional programmer. The demo code provided is in Python and should be easy to understand with just a little effort.

Reference:

  • Book: Neural Networks and Deep Learning by Michael Nielsen, http://neuralnetworksanddeeplearning.com
  • Video Course: Neural Networks: Zero to Hero by Andrej Karpathy, an OpenAI cofounder, https://karpathy.ai/zero-to-hero.html

Benefits of attending this Part 2 class of the series:

  • Build upon the core principles behind neural networks and deep learning in the Part 1 class to learn about convolutional neural networks.
  • See a simple Python program that solves a concrete problem: teaching a computer to recognize a handwritten digit.
  • Improve the result through incorporating more and more core ideas about neural networks and deep learning.
  • Understand basic language modeling.
  • Implement a simple language model that generates baby names from existing names.
  • Get introduced to the popular PyTorch library.
  • Run straightforward Python demo code examples.

Just as for the Part 1 class, for the first section of the Part 2 class, the demo Python program (updated from the version provided in the book) can be downloaded from the speaker’s GitHub account. The demo program runs in a Docker container on your Mac, Windows, or Linux personal computer; we will provide instructions for doing that in advance of the class.

The second section of the Part 2 class is based on a Colab hosted Jupyter Notebook running Python; the link to the file will be shared in advance of the class.

Part 2 class Background and Content: This is a live instructor-led introductory course on Neural Networks and Deep Learning. It is planned to be a three-part series of classes.

Similar to the Part 1 class, which is a pre-requisite, this Part 2 class is also complete by itself. It comprises two sections. Section 1 covers convolutional neural networks. Section 2 covers basic language modeling. It will be a pre-requisite for the planned Part 3 class (to be confirmed) introducing a simple Generative Pre-trained Transformer (GPT).

The Section 1, Part 2, class material is mostly from the same highly-regarded and free online book used for the Part 1 class: Neural Networks and Deep Learning by Michael Nielsen. We add some additional material such as introducing the Residual or Skip connection in a Residual block, which is commonly adopted in many types of deep neural networks.

The Section 2, Part 2, class material is from the sixth video, “Building makemore Part 5: Building a WaveNet,” in the truly amazing video course series referenced above by OpenAI co-founder Andrej Karpathy.

Part 2 class Outline:

Section 1 Convolutional Neural Networks.

  • Simple (Python) Network to classify a handwritten digit
    • Local receptive fields
    • Feature map: Shared weights, bias
    • Pooling
  • Demo code using Theano library for learning only
    • Automatic gradient/backprop calculation
    • Weight initialization
  • Quick introduction to PyTorch library
  • AlexNet: Example of a Convolutional Neural Network architecture
  • Residual or Skip connection
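The first outline items (local receptive fields, shared weights, pooling) can be sketched in a few lines of plain Python. This is an illustrative toy, not the class demo code, which uses the Theano and PyTorch libraries:

```python
# Sketch of two Section 1 building blocks: a feature map computed with
# a shared 3x3 kernel over local receptive fields, then 2x2 max
# pooling. Pure-Python illustration; not the class demo code.

def conv2d_valid(image, kernel):
    """Valid convolution (cross-correlation, as in most DL code)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Same kernel weights at every position: shared weights.
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response per patch."""
    out = []
    for i in range(0, len(fmap) - size + 1, size):
        out.append([max(fmap[i + a][j + b]
                        for a in range(size) for b in range(size))
                    for j in range(0, len(fmap[0]) - size + 1, size)])
    return out

# A 6x6 "image" with a bright vertical edge down the middle.
img = [[1, 1, 1, 0, 0, 0] for _ in range(6)]
# Vertical-edge detector kernel.
k = [[1, 0, -1]] * 3
fmap = conv2d_valid(img, k)   # 4x4 feature map
pooled = max_pool(fmap)       # 2x2 after pooling
print(pooled)                 # prints [[3, 3], [3, 3]]
```

The feature map responds strongly wherever its receptive field straddles the edge, and pooling keeps only the strongest local response; in a real CNN the kernel weights are learned rather than hand-chosen.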

Section 2 Basic Language Modeling.

  • Simple language model (generate baby names from existing names)
    • Vocabulary (character-level)
    • Block or Context length – # of tokens (characters) considered in predicting next one
    • Datasets for training, validation, test
    • Multi-layer neural network
      • Embedding layer, Flatten layer, Linear layer, BatchNorm1d layer, Tanh activation
      • Improve Flatten layer with a hierarchical architecture
      • PyTorch’s cross_entropy method to get loss
      • Automatic gradient calculation with PyTorch’s loss.backward method
      • Stochastic Gradient Descent to learn/update parameters
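To give a feel for the Section 2 material, here is a deliberately simplified character-level name generator: a bigram count table rather than the multi-layer network with embedding and BatchNorm layers built in class.

```python
# Character-level bigram name generator -- a deliberately simplified
# stand-in for the multi-layer model built in class (which uses an
# embedding layer, hierarchical flattening, and PyTorch). Here the
# "model" is just a table of bigram counts sampled proportionally.
import random

# Tiny stand-in training set; the class uses a large list of baby names.
names = ["emma", "olivia", "ava", "isabella", "sophia", "mia", "amelia"]

# Count bigram transitions; "." marks the start and end of a name.
counts = {}
for name in names:
    chars = ["."] + list(name) + ["."]
    for a, b in zip(chars, chars[1:]):
        row = counts.setdefault(a, {})
        row[b] = row.get(b, 0) + 1

def generate(rng):
    """Sample a new name one character at a time from the bigram table."""
    ch, out = ".", []
    while True:
        nxt = counts[ch]
        ch = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if ch == ".":
            return "".join(out)
        out.append(ch)

rng = random.Random(0)
samples = [generate(rng) for _ in range(5)]
print(samples)
```

A bigram model only conditions on one previous character; the class model extends the context to several tokens and replaces the count table with learned layers, which is exactly what the Block/Context length and multi-layer network items above describe.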

Part 2 class Pre-requisites: The material in the Part 1 class, which requires some basic familiarity with multivariable calculus and matrix algebra, but nothing advanced. Basic familiarity with Python or similar computer language.

Speaker: CL Kim works in Software Engineering at CarGurus, Inc. He has graduate degrees in Business Administration and in Computer and Information Science from the University of Pennsylvania. He had previously taught for a few years the well-rated IEEE Boston Section class on introduction to the Android Platform and API.

The Benefits of Online and On-Demand Training with IEEE Boston

1. Expert Trainers: Online training with an expert gives learners from anywhere in the world high-quality instruction from seasoned professionals.

2. On-Demand Access: With on-demand course offerings, students learn at their own pace, allowing them to revisit topics as needed.

3. Online Support: Dedicated online support ensures that learners receive prompt assistance.

Reach your full potential as part of the world’s largest technology community. Join professionals, experts, and advisors who can help shape your career, offer resources to acquire new skills, and advance your professional development.


IEEE Boston sponsored conferences and symposiums.