The Best Sounds Around

Audiobooks are the new books, and there are plenty of reasons why you should be listening to them instead of reading. With audiobooks, you can […]


About

Nadia Rose Music is an organization dedicated to music.





Effective Practice of Musical Instruments


There are different tips and tricks that make our practice more effective and fun. In this video, learn to play an instrument through effective practice of […]



Modifying Spotify’s Streaming Quality

Do you have the Spotify app on your iPhone or Mac? How is it going? Does it give you good sound quality? If not, here’s good news: you can adjust Spotify’s sound quality while you stream so you can enjoy better-sounding music!

How to Modify Spotify’s Streaming Quality

Sometimes you just need your music on the go. You can reduce data usage by lowering the audio quality so that streaming eats less of your expensive mobile data. Check below for the step-by-step guide on how to do it (a rough data-usage sketch follows the steps):

On an iPhone:

  1. Open Spotify.
  2. Tap the Settings icon.
  3. Scroll down and look for Audio Quality.
  4. Just below, you will find separate settings for Wi-Fi and Cellular streaming. You generally have four options to choose from: (1) Low, (2) Normal, (3) High, and (4) Automatic.

When you choose Automatic, the app adjusts the audio quality to match the strength of your connection. Moreover, paid subscribers get a fifth option called Very High.

On a Mac:

  1. Open Spotify.
  2. Find and click Preferences.
  3. Click on Audio Quality.
  4. Just below Audio Quality, you will see Streaming Quality.
  5. Click the drop-down button on the right side of the screen.
  6. Select from the five options available: (1) Low, (2) Normal, (3) High, (4) Automatic, or (5) Very High for Spotify’s paid subscribers.
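
To get a feel for what these settings mean for your data plan, here is a rough back-of-the-envelope sketch in Python. The bitrates are approximate figures commonly cited for Spotify’s quality tiers, not values taken from this article, so treat them as assumptions.

```python
# Approximate data usage per hour for each streaming quality tier.
# The bitrates below are commonly cited estimates for Spotify's quality
# settings; they are assumptions here, not official figures.
QUALITY_KBPS = {
    "Low": 24,
    "Normal": 96,
    "High": 160,
    "Very High": 320,  # paid subscribers only
}

def mb_per_hour(kbps: int) -> float:
    """Convert a bitrate in kilobits per second to megabytes per hour."""
    return kbps * 3600 / 8 / 1000  # kilobits -> kilobytes -> megabytes

for name, kbps in QUALITY_KBPS.items():
    print(f"{name:>9}: ~{mb_per_hour(kbps):.0f} MB per hour")
```

On these assumptions, dropping from High to Low cuts data usage from roughly 70 MB to about 11 MB per hour of streaming.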

Common Methods of Starting a Mix

How do you start a mix? A perfectly legitimate question. Some ponder it and experiment, others don’t give it a second thought and just do everything as usual. It turns out, however, that there are a few starting points that mixing engineers and producers like to pursue.

First of all, in many productions the exact point at which editing ends and mixing begins cannot be clearly identified. Even if you use a to-do list or a deadline to define the moment at which all the cutting is finished and balancing begins, inconsistencies almost always surface during the mixing process. Sometimes that is because they only cross the masking threshold, and thus become audible, after processing with EQ, dynamics, and, above all, the faders.

Methods To Start A Mix

Bass drum as the first signal

The bass drum as the basis for the mix of an entire track is found not only in genres where the kick plays a major role, such as hip-hop. One explanation is that the instrument is one of the deepest and most energetic, and the mix is built up through the spectrum from the bottom, at least at the beginning. Another explanation could be that on analog consoles the bass drum traditionally sat on channel 1 and was recorded on track 1, especially with consoles with simple routing or via a direct out to the tape machine, so it ended up as the first signal in the mix on the first channel as well. Even more obvious: live engineers have carried the habit of starting the soundcheck with the kick drum over into the studio and into the mixdown.

The advantage of this approach is obvious: an essential signal serves as the basis for the mix. But what comes next, the snare? Or the bass? And there is one problem: once shaped as desired, this first signal is unlikely to stay that way. It has to be readjusted again and again, be it the bass drum/bass relationship or its attack sound in relation to vocals, guitars, and the like.

Singing as the first signal

“The most important signal first!” is how you could sum up this approach. It seems plausible to first shape and polish the part of the music that gets the most attention, i.e. the vocals (singing or rap), as needed. This then becomes the guideline to which all further signals are aligned.

At first glance, that sounds quite reasonable. And at second glance too! However, the genre and the signals have to suit it; this is a practicable approach for singer-songwriter music, pop, chart hits, and hip-hop. The catch, here too, is that the vocal signal on its own does not have to sound especially “good” and complete in order to work in a mix.

This touches on one of the main problems of the two previous methods: judging signals in solo mode for a mix is actually a bit clumsy. The topic is definitely material for in-depth discussions among colleagues.

Main microphone as the first signal

Many productions are not recorded with a main microphone at all. Where one is used, primarily in orchestral and choir productions, it makes sense to work on that stereo pair first. If the recording has been done properly, the demands on the mix quickly stay within narrow limits, and some of the recorded spot microphones may not have to appear in the mix at all.

Does that seem far removed from a rock/pop production? Well, this approach can be transferred, for example, to starting with the overhead miking of a drum kit when mixing the drum subgroup.

Overall Mix

Whether it really makes sense to process solo’d signals in a mix is an open question. Another approach is to roughly set up the complete mix and work your way from a rough mix toward the final product. The advantage: the most essential parameter is addressed first, i.e. the level. Then processing of the level over time through automation or dynamics processing, or setting frequency-dependent levels (using filters and EQ), can follow. The nice thing about this method is that everything can always be heard in context. However, clearing the chaos is not for everyone, especially with really complex productions, which quickly reach three-digit track counts. Accordingly, there is an exciting variation:
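
To make the “level first” idea concrete, here is a minimal sketch, not taken from this article, of what a rough mix amounts to in code: each track gets a fader setting in dB, the dB value is converted to a linear gain, and everything is summed to a mix bus before any EQ, dynamics, or automation. Track names, fader values, and the test signals are hypothetical.

```python
import numpy as np

def db_to_gain(db: float) -> float:
    """Convert a fader level in dB to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def rough_mix(tracks: dict[str, np.ndarray], faders_db: dict[str, float]) -> np.ndarray:
    """Sum all tracks at their fader levels; no EQ, dynamics, or automation yet."""
    length = max(len(audio) for audio in tracks.values())
    bus = np.zeros(length)
    for name, audio in tracks.items():
        gain = db_to_gain(faders_db.get(name, 0.0))
        bus[: len(audio)] += gain * audio
    return bus

# Hypothetical one-second test tones standing in for real recordings.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
tracks = {
    "kick": 0.8 * np.sin(2 * np.pi * 60 * t),
    "bass": 0.6 * np.sin(2 * np.pi * 110 * t),
    "vocal": 0.5 * np.sin(2 * np.pi * 440 * t),
}
faders_db = {"kick": -6.0, "bass": -8.0, "vocal": -3.0}
mix = rough_mix(tracks, faders_db)
print(f"Rough mix peak level: {np.max(np.abs(mix)):.2f}")
```

Setting the levels first, in context, mirrors the workflow described above: everything else (automation, dynamics, EQ) only refines this balance.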

Main Signals

Instead of bringing up every signal at the start of the mix, only the basic stock is used, for example vocals, bass drum, snare, bass, and guitar, without effects, doublings, background vocals, pads, effect returns, and the like. With this method, the essential parts of a mix are processed and positioned to the point where the sound is balanced. Here, too, it is often noticed that some of the available signals are not necessary at all, and only clutter the mix and make it less transparent. Overall, this approach appears to be one of the most sensible.

Everything else

Of course, there are other approaches to starting a mix. Have you come across other methods? How, and with what, do you start?
