ECE3340 - HW 4A - Analysis of excerpts from the class
your name, ID
You must answer in the narrative style and in full sentences (good grammar and prose are a plus). Show your thinking and reasoning in your discussion or explanation. Do not just show a number or a graph as your answer. Your work is treated with the same respect as any technical or scientific article, in which figures and tables support the discussion rather than constitute it. Hence, your discussion is the essence.
Instructor’s comments are in italic and highlighted in light red.
A number of students in the class did honest, excellent work with very high scores. It is worthwhile to review what classmates did and understand what is expected of this course.
1A. (40 pts) Choose either 1A (this) or 1B below.
Separate two musical sounds that incidentally overlapped in time.
Listen to the below:
It was a concerto mistake: the second instrument started 1/2-note too soon (~0.5 sec) before the first instrument stopped. You will "sound-shop" (like "photoshop") such that the second sound starts 1/2-note later, immediately after the first sound stops.
See Fourier Tutorial for explanation.
Instructor’s comments. Although the entire answer was already posted in this blog (and a few unabashedly copied and pasted it into their HW), one does not learn anything until going through the steps and gaining insight. The work below is selected because the author described their own understanding along the way.
If you wonder why some students get such high scores in the class, and why the HW asks for such “stupid things” as:
You must answer in the narrative style ... Show your thinking and reasoning in your discussion or explanation. Do not just show a number or a graph as your answer.
then read through various examples to know why.
Author’s Answer - bold and highlighted are by Instructor
Instructor’s comments. Excellent work. This same problem has been given for many years, and this is the best the Instructor has ever seen, for two reasons:
1- the dedication of the work, using far more filter bands for both sounds than all previous submissions;
2- the clarity of the explanation of the approach. Although previous work also had excellent explanations, this work has slightly more detail, such that someone who is totally unfamiliar with this course can completely understand what the author did.
If you wish to know what “Show your thinking and reasoning in your discussion or explanation” is like, read the examples in this file; one is below.
We would like to define an audiodataGet function to help us acquire the audio data of the sample along with its sampling rate. The function returns two values in the object {x, srate1} so that we can use them later in the PSD analysis App.
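Since the original Mathematica code cell is not shown here, a hypothetical Python analogue of an audiodataGet-style helper might look like the following sketch: read a WAV file and return its samples and sampling rate (the function name, file path, and tone used for the demonstration are placeholders, not the author's actual code).

```python
# Hypothetical Python analogue of the notebook's audiodataGet function:
# read a 16-bit mono WAV file and return (samples x, sampling rate srate1).
import os
import tempfile
import wave

import numpy as np

def audiodata_get(path):
    """Return (samples scaled to [-1, 1], sample rate) for a 16-bit mono WAV."""
    with wave.open(path, "rb") as w:
        srate = w.getframerate()
        raw = w.readframes(w.getnframes())
    return np.frombuffer(raw, dtype=np.int16) / 32768.0, srate

# Demonstrate on a synthetic 440 Hz tone written to a temporary WAV file.
tone = (32767 * np.sin(2 * np.pi * 440 * np.arange(800) / 8000)).astype(np.int16)
path = os.path.join(tempfile.mkdtemp(), "tone.wav")
with wave.open(path, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(tone.tobytes())

x, srate1 = audiodata_get(path)
```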
To tackle this problem, we use the PSD analysis App on the AppPage of this class. The two sounds overlap in the middle part of the sample: if we divide the sound into three parts, the first part has purely sound 1 and the last part has purely sound 2, so it is the overlapping middle part that we must separate. There are many ways to do this; we chose to analyze this sample four times. The first pass covers the first portion, to note the dominant harmonics of sound 1 for later extraction. Then we go to the first and middle parts and select exactly those frequencies to save; this picks sound 1 out of the combination. Similarly, we go to the last portion and analyze sound 2’s harmonics, then go to the middle and last parts and pick out exactly those frequencies to save. This gives us sound 2 separated from the combination.
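The "save only these harmonic bands" idea in the procedure above can be sketched numerically. This is a hypothetical numpy version for illustration only; the actual work used the class PSD analysis App, and the function name, sampling rate, and toy frequencies below are assumptions.

```python
# Hypothetical numpy sketch of the "save only these harmonic bands" step:
# zero every FFT bin outside the chosen passbands, then invert the FFT.
import numpy as np

def keep_bands(x, srate, bands):
    """Zero every FFT bin outside the given (f_lo, f_hi) passbands."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / srate)
    mask = np.zeros(len(freqs), dtype=bool)
    for f_lo, f_hi in bands:
        mask |= (freqs >= f_lo) & (freqs <= f_hi)
    return np.fft.irfft(X * mask, n=len(x))

# Toy example: a 440 Hz "sound 1" overlapped with a 1000 Hz "sound 2";
# passing only the band around 440 Hz recovers sound 1 alone.
srate = 8000
t = np.arange(srate) / srate
mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1000 * t)
sound1 = keep_bands(mix, srate, [(400.0, 480.0)])
```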
Below is the procedure we followed.
We load the object in the app and work with portion 1. We counted 13 harmonics of sound 1 above the -60 dB threshold, so we place a passband over each of those 13 harmonics. The locations of those passbands are shown below, along with the 13 passbands at 1.0 scaling magnitude (shown in the gray plot). Notice there is a 0 Hz coefficient because we included a silent part at the beginning of the snippet; of course, we do not include that. We save the passband locations for the extraction below.
Instructor’s comments. The instructions suggest 11 bands; the author used 13 bands, while many in the class did zero bands, copying and pasting from others and claiming it as their own. Only by dedication to real work does one discover true knowledge.
Having those 13 passband locations, we go to the first and middle portions and pick exactly those bands. We tried our best to place the passbands at the same locations, and the result was pretty good: they looked almost the same. We then saved the mp3; it is sound 1.
Next, to pick out sound 2, we go to the last portion of the sample and choose the dominant harmonics. We counted 10 bands above -60 dB and chose all 10. However, the App said channel 1 was silent and output only 9 bands in the gray plot of our passband filter (at 1.0 scaling magnitude). We saved the locations of those bands for later extraction; since we did not know which band was lost, we pick exactly 10 bands later.
We then went to the middle and last portions of the sample and put the passbands at the exact locations we got from analyzing the last portion. We tried our best to replicate those locations and got a fairly similar result. We then saved the mp3 file; it is sound 2 separated from the overlap.
Lastly, let’s check the result of our sound separation.
[Audio players for Sound 1 and Sound 2 omitted]
Instructor’s comments. Excellent work, except:
1- sound 2 is a bit short; the author might have made the mistake of not selecting the whole duration of the second note;
2- the author forgot one last thing: put the two sounds back together.
But these are minor things. The approach to the work, and the presentation of how it was done, are what matter most.
1B. (40 pts + 20 pts bonus) Choose either 1B (this) or 1A above.
Removing white noise from your voice
details and background of the question (open to see)
Author’s answer with Instructor’s comments
Instructor’s comments. Excellent work, demonstrated both technical understanding and dedication. Note how the author makes subsections to delineate each step.
Data Collection:
I ensured the original recording was saved using the method coded above. I tried to speak clearly and loudly enough to aid in the reconstruction of the signal after noise is added.
White Noise, Interference, and Total Signal Creation:
The final signal created, “totalSignal”, is much more cluttered than the original, with an added frequency component at 2000 [Hz] and a Laplacian distribution of noise, so it should take some time to remove the additional signals. The original recording can still be heard well enough, and I hope that is a good sign for filtering.
Spectrogram and Signal Time Separation of Initial Clean Signal:
Instructor’s comments. Note that the author understands the code well enough to spot and correct typos. The HW is sometimes designed with little mistakes here and there to see whether students can overcome them. This one was not intended, but it shows the user’s understanding.
Time Separation:
“I”: t = 0.05 - 0.50 [sec]
“am”: t = 0.50 - 0.85 [sec]
“Ma”: t = 0.85 - 1.05 [sec]
“x”: t = 1.05 - 1.2 [sec]
“well”: t = 1.2 - 1.65 [sec]
Instructor’s comments. As everyone’s name is different, the intelligence is in the decomposition of the sound, and this author had the insight to recognize that one of the sounds is actually a phoneme snippet (a unit of language pronunciation), although it is just a single consonant.
The spectrogram plot is a very useful tool, especially when some words seem to flow together. In my case, “I” and “am” are almost slurred together, but in the spectrogram the small nuances are more apparent, which shows how the application helps in partitioning the signal fragments. The way I said “I” also seems to gain some frequency components as I finish the word, which can be seen in the time segment 0.4 - 0.5 [s].
Instructor’s comments. This is what it means to be a scientist or engineer: making observation and gaining insight.
Using 5 Band Filter Application to Filter Data:
Instructor’s comments. The author overdid it a bit with this approach; however, it is still very good. For complex sounds in human language, it is OK to use a broadband filter, even with noise. The author could have used just a few broader bands to save some effort, but this shows excellent dedication to the work.
1. “I” separation:
Linear PSD Plot:
The plot above is shown to compare how closely my selected bands matched the strongest frequency components.
Selected Bands:
14 bands were selected to create a resemblance of me saying “I”.
2. "Am" separation:
Linear PSD Plot:
Selected Bands:
15 bands were selected and I think this segment was recreated quite well.
3. "Ma" separation:
Linear PSD Plot:
Instructor’s comments. For this, the author should have definitely used broadband: the group of lines is clumped together to form a phoneme typical of human language.
Selected Bands:
14 bands were used to recreate the “** ” sound, but I do not feel that this part of the segment was the best recreation.
4. “x” Separation:
So, this is a bit of a conundrum. It seems that I am fortunate and at a disadvantage at the same time. Since the “x” sound sounds like static itself, it would be close to impossible, or pointless, to try to find bands and select the frequencies that comprise the “x” sound; however, since they do sound similar, I can merely remove the beep at 2000 [Hz] from the sound and hopefully get a convincing “x” sound.
Picture Reasoning:
Instructor’s comments. Excellent insight, this is what “thinking and reasoning” means. The resulting approach is quite good.
Selected Bands:
I removed the 2000 [Hz] component from the “x” fragment and chose these two wide bands, because I found they gave a good balance of high and low frequencies to produce a convincing “x” sound. In the final signal synthesis, I will reduce the amplitude of the “x” sound because it is too loud compared to the other fragments.
5. “Well” Separation:
Linear PSD Plot:
Selected Bands:
13 bands have been selected for this segment of the original signal.
Each of the above signal fragments was reconstructed, I would say, to a passing version of the original recording. I believe that combining the individual fragments into a complete signal will also aid how similar it sounds to the recorded signal. From my own experience, obtaining a somewhat even spread of frequency components in each time segment produced the best results; this can be seen quite well in the fragments “I” and “am”. In the cases that did not have an even spread of frequency components, “well” and “ma”, the recreation was not as successful. Not every time segment could get this even spread, however, because not every signal had strong frequency components spread evenly throughout.
Data Extraction Pt 2:
Instructor’s comments. If anything, the author should start learning to use arrays in everything. An area to improve.
The reconstructed signal sounds close to the original recording, but I want to remove the empty space between each fragment to make the signal sound more natural and fluid.
Now using the clipper tool in ECEgen_APP_PSD_analysis_M12_v2:
From the audio reconstructed and clipped, I had to reuse the “audiodataGet” function to separate the data in order to combine each of the data sets.
Final signal Recreated vs Original Comparison:
Final Signal Recreated:
Original:
Conclusion:
This exercise was very time consuming as well as difficult; however, I am proud of what I was able to recreate with under 20 Fourier components in each signal fragment. The added difficulty in recreating my signal came from the “x” portion, which is essentially noise, but I believe my solution is sufficient. The most difficult fragment to recreate was the “well” portion, I think because of the “w” and because fewer well-defined peaks were present in the “well” data. This portion of the homework is a good example of how difficult noise removal from a very noisy signal can be, and the result is still not a perfect replica!
Instructor’s comments. No. Fundamentally, once noise is added, there is no such thing as perfect extraction; it is impossible by the definition of “noise”. Only the improvement of the signal-to-noise ratio is relevant as a figure of merit. Overall, this is outstanding work demonstrating exceptional understanding and skill in Fourier signal processing.
2. (20 pts) Image processing
Instructor’s comments. This is designed as a “gimme” 20 points and most do get 20 pts. But this work gets extra credit because it does exactly what the HW asks (ignored by many in the class):
You must answer in the narrative style and in full sentences... Show your thinking and reasoning in your discussion or explanation. ...
2.1 (10 pts) A portrait for the work
Take a portrait of yourself with decent resolution and detail for this work. Limit the file size to <= 5 MB. Reduce the image size for ease of processing and file size; below is an example:
You can execute the above in another notebook and paste the image here. If you include the original image, your HW file size may be too big. Next, add your name (first & last in any order you wish) with the style you like (pick color, font, face, etc). See help.
Solution:
First, in another Mathematica notebook we run the following function to reduce the image file size, minimizing the overall size of our file:

imgPortrait0 = ImageResize["image copied here", 400]

We then copy the reduced image into the variable imgPortrait0:
Figure 2.1.1: Photograph of Myself with Name on Photo.
Instructor’s comments. Note how each figure has a caption, like in a report.
Next, as can be seen in the lines of text above, our objective is to place our name, in any color/size/font of text we desire onto the photo we took. We do this using the Graphics function, and alter our text style, size, etc. using the Text function.
2.2 (10 pts) Fourier of Fourier:
What happens if we Fourier an object twice? i. e. Fourier of Fourier?
Apply Fourier of Fourier to your selfie in 2.1 above and show the result.
Solution:
Instructor’s comments. If you are one of those strong, quiet types who only posted a picture with no caption, no explanation, nothing, nada... read the below to understand what “describing and explaining your work” means.
The first step in solving part 2.2 is to separate our photo, with our name on it, into the three layers of a photo. A photo is stored as a 3D matrix with three “layers”: Red, Green, and Blue. Each color layer is a 2D matrix, and each entry holds the “intensity” of that color at the corresponding pixel; put together, these create the color of each pixel in the original image. So we separate the photo into the three RGB layers in the variable “imgRGB”, and then pull the image data from that variable into “imgdata”. Then we perform our first Fourier transform of the photo data, stored in the variable imgFT. This information is in the form of complex numbers, so next we take the absolute values, stored in imgFTAbs, and the phase angle of each complex number in the matrix, stored in imgFTφ. Now we visualize imgFTAbs and imgFTφ side by side with the Grid function (after recombining the separated RGB channels with ColorCombine). As can be seen, the absolute-value matrix (left) is mostly black, except for some colors near the edges. The phase matrix (right) looks like static: random gradients of black and white.
[Side-by-side images: absolute value (left) and phase (right) of the Fourier-transformed photo]
Now we perform our second Fourier transform on the image to see what we get. After the second transform, we use the Chop function to remove the small, residual, insignificant imaginary parts of the complex numbers. Again, we use the Grid and ColorCombine functions to show the original photo next to the photo that has been Fourier transformed twice. As we can see, Fourier transforming an image twice just inverts it by 180 degrees, which is exactly what we would expect, because Fourier transforming any value twice gives the inverse (or negative) of the original argument.
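The same "Fourier of Fourier is inversion" observation can be checked numerically on a one-dimensional sequence. This is a numpy illustration added here for clarity, not the notebook's own code:

```python
# Numerical check of the observation above: applying the DFT twice
# returns the input with index n mapped to (-n) mod N, scaled by N.
import numpy as np

x = np.arange(8.0)                       # any real sequence
xx = np.fft.fft(np.fft.fft(x))           # Fourier of Fourier
expected = len(x) * np.roll(x[::-1], 1)  # N * x[(-n) mod N]
print(np.allclose(xx, expected))         # → True
```

The zero-index element stays fixed while all other elements reverse, which is exactly the inversion f[x] → f[-x] the instructor describes below.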
Instructor’s comments. Strictly speaking, it is NOT 180 degrees, which implies a rotation. That applies ONLY in 2D. More generally, it is just an inversion, period. f[x] --> f[-x] and x can be a multidimensional vector.
Instructor’s comments. This work perfectly meets the intended objectives: Don’t just use the given code, but also look up to understand what it means. This is how one learns coding or using high-level language.
3. (40 pts) Numerical exercise: steps of DFT
The objective of this problem is to guide you through typical steps of numerical DFT. The code is given explicitly (in a tutorial file), and it is very easy to translate into Python in a Jupyter notebook if you wish.
You must have a brief and simple description or explanation of each step (even if it sounds easy or silly to you). Just don’t show a bunch of lines of code and output.
Remember, you MUST type every line of code yourself, because an objective is also for you to exercise programming.
Instructor’s comments. Again, the work below meets exactly the intention of the assignment. The code is given; one only has to type it. Perhaps one did the work to retype it, and many copied and pasted. But the objective is stated: “You must have a brief and simple description or explanation of each step.” Too many in the class failed to do just this. See the excellent excerpts below.
3.1 (5 pts) Signal
Consider we have this message:
Using the bit time (1/bit rate) as the unit time, we can plot the message’s digital code:
Assume a sampling rate of 8 points per unit time (duration of a bit); use DFT to plot the digital signal spectrum. Suggestion: follow the Fourier tutorial 3. Discuss your observations about the spectrum.
3.1 (5 pts) Signal
Answer
Consider we have this message:
Using the bit time (1/bit rate) as the unit time, we can plot the message’s digital code:
The code below creates the function ‘digitsig’. This function builds a time-domain representation of our message signal, ‘msg’: it creates unit steps (rectangle functions) where our signal is 1, and nothing where it is 0. For example, we start off with two 1’s, so the first part of our signal is two rectangle functions of 1.0 amplitude/height and 0.5 width. We then plot the digitsig function so that we can visualize our digital message.
Figure 3.1.1: Plot of Representation of our Digital Signal for our Message as a function of time.
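A minimal numpy sketch of the rectangle-train idea described above (hypothetical; the author's actual digitsig cell is in Mathematica and not shown, and the sample message bits here are placeholders):

```python
# Hypothetical sketch of the 'digitsig' idea: each message bit becomes a
# rectangle of height 0 or 1 lasting one bit time, sampled at 8 points
# per bit as the problem specifies.
import numpy as np

def digit_sig(msg_bits, samples_per_bit=8):
    """Sample the digital waveform at samples_per_bit points per bit."""
    return np.repeat(np.asarray(msg_bits, dtype=float), samples_per_bit)

sig = digit_sig([1, 1, 0, 1])   # starts with two 1's, as in the text
```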
Assume a sampling rate of 8 points per unit time (duration of a bit); use DFT to plot the digital signal spectrum. Suggestion: follow the Fourier tutorial 3. Discuss your observations about the spectrum.
As described above, using a sampling rate of 8 points per unit time, we take the DFT of our signal and plot it as a function of frequency. We can see this transforms our function slightly: the vertical lines from our rectangle functions now have a slight slope to them, which also causes the function’s shapes to come close to crossing over each other. This lets us realize that our sampling rate of 8 is very close to our Nyquist rate, the minimum rate at which a finite signal can be sampled while retaining all its information (once the shapes start to cross over each other, the information in those regions cannot be retrieved).
Figure 3.1.2: Plot of our Digital Signal Spectrum using DFT.
Next, we set up the frequency array for our Fourier transform. Our frequency step is equal to one over the duration of our message, and the variable npt is just the length of our signal. The frequency array is then the range from 0 to npt - 1, shifted down by the integer part of npt/2, all multiplied by the frequency step, so that the array is centered on zero. Then we plot the frequency array to get a visual of it.
Figure 3.1.3: Plot of our Frequency Array.
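The centered frequency array described above can be sketched in numpy as follows (a hypothetical version, since the notebook's code cell is not shown; npt and total_time are assumed example values):

```python
# Hypothetical numpy version of the centered frequency array: the step df
# is the reciprocal of the total duration, and the bin indices are shifted
# by floor(npt/2) so the array is centered on zero.
import numpy as np

npt = 64                  # number of samples (assumed value)
total_time = 8.0          # total signal duration, so df = 1/8 Hz
df = 1.0 / total_time
freqs = (np.arange(npt) - npt // 2) * df   # runs -4.0 Hz ... +3.875 Hz
```

This is the same array that `np.fft.fftshift(np.fft.fftfreq(npt, d=total_time/npt))` produces, which is a convenient cross-check.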
Instructor’s comments. This is what “learning” means. This problem is designed to force one to learn DFT, and this author nails it. Sadly, many in the class just Ctrl+C, Ctrl+V the code and run it.
Now, we perform the Discrete Fourier Transform (DFT). We know that the DFT is mathematically a sum that approximates the actual Fourier integral, which computers use to simplify the integration. In Mathematica, the summing portion of the expression can be done using the function Fourier[] on the original signal; then we just need to approximate the rest of the equation to get our DFT. We know that Mathematica uses a 1/(2 π)-type normalization with its FourierTransform[] function, so the transform must be rescaled for our math to work; hence the formula for sigFT in our code below. We then use the RotateRight function to shift our spectrum so it is centered around zero, and take the absolute value of the Fourier transform to remove the complex phases. We also take the Log of the result so we can plot it both linearly and logarithmically. Finally, we plot both the linear and logarithmic versions so we can visualize them.
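The equations in this passage were lost in the export. The relation presumably intended, written here as a hedged reconstruction (the notebook's exact sign and normalization conventions may differ), is the Riemann-sum approximation of the Fourier integral by the DFT:

```latex
% Continuous Fourier transform to be approximated:
F(f) \;=\; \int s(t)\, e^{-2\pi i f t}\, dt
% Riemann-sum (DFT) approximation on samples t_n = n\,\Delta t:
F(f_k) \;\approx\; \Delta t \sum_{n=0}^{N-1} s(t_n)\, e^{-2\pi i f_k t_n}
```

The sum is what Mathematica's Fourier[] computes (up to its default 1/√N convention), and the Δt prefactor is the rescaling the author applies in sigFT.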
Figure 3.1.4: Linear and logarithmic plots of the DFT of our digital signal, centered about the origin.
We note above that the logarithmically scaled plot has two drops in power, which are mirrored in the negative portion of the plot. Similarly, the linear plot has spikes in power that are mirrored in the negative portion of the plot.
Finally, we now want to plot the Power Spectral Density of our sampled signal. We plot the logarithmic DFT of our signal as a function of our frequency array in the following code (along with adding some labels, titles, and coloring so that the plot is easier to read).
Figure 3.1.5: Logarithmic Power Spectral Density Plot of Digital Signal.
3.2 (5 pts) Signal with carrier
Now we use a carrier with frequency fc = 125, using direct amplitude modulation. Obviously, we cannot use the same sampling rate.
1- Explain why we cannot use the same sampling rate and expect to know about the signal with carrier.
2- Select sampling rate 1100, redo to obtain the same spectra as above, and explain (in terms of the Fourier transform theorem in sub-section 7.4.1 of this) how this spectrum is different from the previous one.
Solution
Instructor’s comments. This is what “learning” means. Very few in the class quoted the Nyquist theorem.
In answer to the first question, we cannot use the same sampling rate of 8 because of the Nyquist sampling theorem. In summary, it says that the sampling rate cannot be less than 2 times the bandwidth of your signal or you will lose information! Now that we have added a carrier of frequency fc = 125, a sampling rate of 8 would not allow us to recover our original signal. So we raise our sampling rate to 1100 (which is much greater than 2 * 125) so that we can recover our original signal later, once we have put it on the carrier and taken the Fourier transform of the resulting signal.
Below we create the carrier function, then set our sampling rate to 1100. Next we create our time array as before and create the digital signal of our message. Finally, we multiply our carrier signal by our digital signal to get the final output. Below is a figure showing what this looks like, and an audio file of what the signal would sound like (we note that the audio signal waveform looks almost exactly like our original digital signal!).
Figure 3.2.1: Digital Signal Figure of Message and Carrier Frequency.
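The modulation step just described can be sketched in a few lines of numpy (a hypothetical illustration: the bit pattern below is a placeholder, not the actual HW message, and the original code is Mathematica):

```python
# Hypothetical sketch of the direct amplitude modulation step: sample the
# bit waveform at 1100 points per bit time and multiply by a 125 Hz carrier.
import numpy as np

fc, srate = 125, 1100
msg_bits = [1, 0, 1, 1, 0]                                 # placeholder bits
sig = np.repeat(np.asarray(msg_bits, dtype=float), srate)  # 1 bit = 1 unit time
t = np.arange(len(sig)) / srate
modulated = sig * np.sin(2 * np.pi * fc * t)               # direct AM
```

The carrier is simply switched on during 1-bits and off during 0-bits, which is why the waveform envelope "looks almost exactly like our original digital signal."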
Dynamic Figure 3.2.1: Audio of the digital signal of message and carrier frequency.
Next, we repeat the process we did before: we create and plot our frequency array for our digital signal.
Figure 3.2.2: Plot of Frequency Array for the DFT of our Digital Signal and Carrier Signal.
Then, we create the Discrete Fourier Transform of said signal, and plot the result logarithmically and linearly.
Figure 3.2.3: Linear and logarithmic plots of the DFT of our digital signal and carrier signal as a function of frequency.
We note that, unlike the DFT of our original signal, this plot has two even spikes about the y-axis in both plots.
Finally, as we did before, we plot the linear and logarithmically scaled Power Spectral Density of the DFT of our digital signal as a function of frequency. As expected, the carrier frequency has shifted our PSD! It is now centered about the fc = 125 of our carrier signal!
Instructor’s comments. The intention of this is to demonstrate one of the Fourier transform theorems (the shift/modulation theorem). The author understands this well.
Figure 3.2.4: Logarithmic and Linear Plot of Power Spectral Density of Digital Signal and Carrier Signal as a Function of Frequency.
3.3 (5 pts) Signal with carrier + noise
Generate white Gaussian noise of amplitude 1/2 of the signal and add to the signal (the signal is 1, so 1/2 is 0.5). Obtain the spectrum again, and discuss your observation.
Answer
We create noise with NormalDistribution of amplitude 1/2: because our signal has amplitude 1, we directly put 1/2 as our σ. After that, we add the noise to the signal and plot the noisy signal; it looks very different from the original.
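The noise step can be sketched in numpy (a hypothetical equivalent of the Mathematica NormalDistribution call; the unit-amplitude placeholder signal and the seed are assumptions for illustration):

```python
# Hypothetical numpy version of the noise step: zero-mean Gaussian noise
# with sigma = 0.5, i.e. half the unit signal amplitude, added to the signal.
import numpy as np

rng = np.random.default_rng(0)            # fixed seed for reproducibility
sig = np.ones(5500)                       # placeholder unit-amplitude signal
noisy = sig + rng.normal(0.0, 0.5, size=len(sig))
```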
Also, we would like to listen to it. Apparently, a lot of white noise was introduced.
Below, the two spectra are of the noisy signal, in linear scale and in log scale respectively. Compared to the original spectra, the log-scale spectrum is more affected by the noise, whereas in linear scale we can still see the dominant harmonics. We can see how bad this noise is by normal communication standards: on the dB-scale plot, the noise killed the small lobes of the signal, and we can only see the dominant central spikes.
Instructor’s comments. This is what “write your observation” means. The author correctly observes that the noise overwhelms the smaller lobes of the signal PSD. This is the purpose of PSD analysis here: to know the noise level and the signal-to-noise ratio.
3.4 (10 pts) Band-pass filter
The two spectra from the above are shown below together in one graph.
Why don’t we communicate in baseband (the frequency range in red), but always use a carrier band (such as 2.4 GHz for wifi or Bluetooth, or RF or HF for radio and TV)? There are many reasons. One, of course, is that EM waves can have much higher bandwidth (for transmission capacity) and travel much farther than any other natural waves that we know of. But interference and noise are also canonical reasons.
If we put our signal on a carrier, then to receive it, it is best to use a bandpass filter: a filter that allows only a small range of frequencies around the carrier frequency. The width of that frequency range is known as the bandwidth, or bw.
Apply a Dirichlet filter from fc - 0.5 bw to fc + 0.5 bw, for a range of bw: 5, 10, 20, 30, and calculate and plot the output.
Answer
Instructor’s comments. Some in the class simply copied the figures in the blog and passed them off as their own, without even an attempt to run the code. Below is the kind of discussion expected for this assignment. The highlighted portion below shows the understanding of the concept of trade-off: a larger BW recovers more of the signal, but also more noise. This is why we demonstrate with different BWs.
To filter the signal, we create bandpass filters. Because we are modelling an ideal case, we use a brickwall model for our bandpass filter: the passband is multiplied by 1 and the stopband by 0. We do this to kill all the signal outside of our passband (the band of interest). The plots below are of the first sample (the first piece of the signal, the first 1 in our message). We can see that, with a narrower bandwidth, there is less noise on the sides of the signal, and the signal has less noise overall; the curve looks pretty smooth. However, we lose part of the signal: the original signal should look like a rectangle, but in the plot for bw = 5 it looks like a smooth curve. Increasing the bandwidth of the bandpass filter helps sharpen the edges of the rectangle, so the signal more closely resembles the overall shape of the original. However, increasing the bandwidth also introduces more noise: more noise appears on the sides of the signal, and the low-frequency noise makes the horizontal edge of the rectangle mountainous. This is the trade-off we face if we just filter the signal with this simple bandpass setup. Also, we rotated the Fourier transform of the filtered signal because we wished to bring the band back to its original location, moving the signal centered at the carrier frequency back to the 0 Hz baseband before doing the inverse Fourier transform. Because we did not use any lowpass filter, we can still see the high frequency of the carrier making up the signal.
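A minimal numpy sketch of the brickwall band-pass described above (hypothetical; the actual work was done in Mathematica, and the two-tone test signal below is an assumption for illustration):

```python
# Hypothetical numpy version of the brickwall band-pass: FFT bins inside
# [fc - bw/2, fc + bw/2] are multiplied by 1, all others by 0.
import numpy as np

def brickwall_bandpass(x, srate, fc, bw):
    """Ideal (brickwall) band-pass around carrier fc with bandwidth bw."""
    X = np.fft.fft(x)
    f = np.abs(np.fft.fftfreq(len(x), d=1.0 / srate))  # |f| covers both sides
    X[(f < fc - bw / 2) | (f > fc + bw / 2)] = 0.0     # stopband * 0
    return np.fft.ifft(X).real                         # passband survives

# Toy check: a 125 Hz carrier tone plus a 300 Hz interferer; bw = 30
# keeps the carrier band and rejects the interferer.
t = np.arange(1100) / 1100.0
noisy = np.sin(2 * np.pi * 125 * t) + np.sin(2 * np.pi * 300 * t)
filtered = brickwall_bandpass(noisy, 1100, 125, 30)
```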
3.5 (10 pts) Remove the carrier with low-pass filter
The carrier can be removed by mixing (heterodyne), but since we are learning Fourier filters, we will apply them again for fun and learning. We will use power detection (the square of the signal) and then a low-pass filter to recover the original signal, thereby removing the carrier.
Let the signal above be filtered with a BP filter with BW = 30 (just enough to reduce the noise). Then square the signal, just as a power detector would, then filter it again with a low-pass filter. Do it for a series of LP bandwidths: 5, 10, 20. Observe and discuss the results.
Answer
For this, we apply a bandpass filter of bandwidth 30 around the carrier frequency. After passing through the bandpass filter, we rotate the signal to bring it back to the baseband at 0 Hz; then we do an inverse Fourier transform. This gives us signals like those from the section above. The Reap function is just a convenient way to collect the output involving the sigoutBB variable (which is inside Sow). Then we take the Fourier transform of the signal’s power (which is the square of the signal, by definition), again rotating the spectrum when we take the transform. The lowpass filter is again a brickwall model: we keep only frequencies within the range of interest (5, 10, and 20 respectively). Doing the inverse Fourier transform then gives the 3 outputs shown below. As we can see, increasing the lowpass filter’s bandwidth introduces a lot of noise in the output signal, although the noise floor decreases. Which setup to choose is a matter of preference; however, given the relatively low noise level of the 5 Hz-bandwidth lowpass filter, I think the 5 Hz lowpass is good for this application. We could add a logic gate that reads the signal as HIGH when it is >= 0.25, which would give a clean signal. Of course, this is just a suggestion.
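The power-detection chain just described can be sketched in numpy (a hypothetical illustration: the bit pattern is a placeholder, not the actual HW message, and the noiseless setup is an assumption to keep the example short):

```python
# Hypothetical numpy sketch of the power-detection chain: square the
# carrier-modulated signal, then brickwall low-pass to recover the bits.
import numpy as np

def brickwall_lowpass(x, srate, bw):
    """Ideal low-pass: keep only FFT bins with |f| <= bw."""
    X = np.fft.fft(x)
    f = np.abs(np.fft.fftfreq(len(x), d=1.0 / srate))
    X[f > bw] = 0.0
    return np.fft.ifft(X).real

fc, srate = 125, 1100
bits = np.repeat([1.0, 0.0, 1.0], srate)             # placeholder bit waveform
t = np.arange(len(bits)) / srate
detected = (bits * np.sin(2 * np.pi * fc * t)) ** 2  # power detector (square)
baseband = brickwall_lowpass(detected, srate, 5.0)   # 5 Hz LP, as chosen above
# bits^2 * sin^2 = bits * (1 - cos(2 * 2*pi*fc*t)) / 2, so after the
# low-pass the output sits near 0.5 during a 1-bit and near 0 during a
# 0-bit; thresholding at 0.25, as suggested above, recovers the bits.
```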
Instructor’s comments. The decode function in 3.5 does what the author calls a “logic gate”, although this “logic gate” concept is naive and simplistic. In communication, it is called “thresholding”. There is stochastic mathematics behind it, communication detection theory, which the class may learn later.
3.6 (5 pts) Decoding the message
What does the original message say?
Answer
As discussed above, the first output signal (with the 5 Hz lowpass filter) is chosen for decoding because of its integrity. We show its plot again below.
We assumed that the secret message was written in ASCII code, which uses 7 bits per character. Our message has 35 bits (binary characters), so the ASCII route is quite viable. We first want to recreate a clean digital signal from the lowpass-filtered output, so we implement a threshold assignment: if the magnitude of the signal is >= 0.25, we change it to 1, because we know our signal is digital (0 and 1 only). We introduce a variable nz, the number of samples taken for one input 1/0; this nz equals our sampling rate (1100). After thresholding and getting the digital signal, we group all the samples of one bit together, then group 7 bits of our input into one bundle. Each bundle is the binary representation of one ASCII entry. The output shown below is a list of partitioned ASCII codes; each sublist is the binary representation of the ASCII code of its character.
We use the FromDigits function to convert the binary representation of the ASCII code to decimal, then pass that decimal ASCII code to FromCharacterCode to receive the letters our message represents. The result is “hello”. Through problem 3, we simulated how to transform an ASCII binary signal into an analog signal and use Amplitude Modulation to send it over a large distance. We also simulated noise entering our signal during transmission. Then we used signal-processing techniques, mainly Fourier-transform manipulation, to simulate bandpass and lowpass filters, to move the signal back to baseband, and to decode it. In reality the process would differ somewhat: the AM process would add a piece of the carrier frequency, and we would demodulate by mixing and then detect the signal with amplitude detection (as I know from a signals and systems course). Anyway, that is another option. I learned new things by doing this homework: I learned the technique of power detection, and the decoding part made me research ASCII code. This is a very interesting homework.
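A Python equivalent of this decoding step (hypothetical, since the notebook's FromDigits/FromCharacterCode cell is not shown; the bit list is generated from the known result "hello" purely to make the example self-contained):

```python
# Hypothetical Python version of the decoding step: group the recovered
# bits in sevens and map each 7-bit group through ASCII (FromDigits /
# FromCharacterCode equivalents).
bits = [int(b) for ch in "hello" for b in format(ord(ch), "07b")]  # 35 bits

def decode_ascii7(bits):
    """Convert a flat bit list, 7 bits per character, to its ASCII string."""
    chars = []
    for i in range(0, len(bits), 7):
        code = int("".join(str(b) for b in bits[i:i + 7]), 2)  # binary -> decimal
        chars.append(chr(code))                                # code -> letter
    return "".join(chars)

print(decode_ascii7(bits))   # → hello
```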